From sorrison at gmail.com Mon Oct 1 05:41:34 2018
From: sorrison at gmail.com (Sam Morrison)
Date: Mon, 1 Oct 2018 15:41:34 +1000
Subject: [Openstack-operators] [neutron][lbaas][neutron-lbaas][octavia] Update on the previously announced deprecation of neutron-lbaas and neutron-lbaas-dashboard
In-Reply-To: 
References: 
Message-ID: <18CD3109-FCB7-4D04-A23D-C4E2FAAE3AC4@gmail.com>

Hi Michael,

Are all the backends that are supported by lbaas supported by octavia? I can’t see a page that lists the supported backends.

E.g. we use lbaas with the midonet driver and I can’t tell if this will still work when switching over?

Thanks,
Sam

> On 29 Sep 2018, at 8:07 am, Michael Johnson wrote:
>
> During the Queens release cycle we announced the deprecation of neutron-lbaas and neutron-lbaas-dashboard[1].
>
> Today we are announcing the expected end date for the neutron-lbaas and neutron-lbaas-dashboard deprecation cycles. During September 2019 or the start of the “U” OpenStack release cycle, whichever comes first, neutron-lbaas and neutron-lbaas-dashboard will be retired. This means the code will be removed and will not be released as part of the "U" OpenStack release per the infrastructure team’s “retiring a project” process[2].
>
> We continue to maintain a Frequently Asked Questions (FAQ) wiki page to help answer additional questions you may have about this process: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
>
> For more information or if you have additional questions, please see the following resources:
>
> The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
>
> The Octavia documentation: https://docs.openstack.org/octavia/latest/
>
> Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas
>
> Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the Freenode IRC network.
>
> Sending email to the OpenStack developer mailing list: openstack-dev [at] lists [dot] openstack [dot] org. Please prefix the subject with '[openstack-dev][Octavia]'
>
> Thank you for your support and patience during this transition,
>
> Michael Johnson
> Octavia PTL
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html
> [2] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From sfinucan at redhat.com Mon Oct 1 10:52:43 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Mon, 01 Oct 2018 11:52:43 +0100
Subject: [Openstack-operators] [nova] nova-xvpvncproxy CLI options
Message-ID: <10291bb258d2d4c32d8e9baefae8254bc83cc08c.camel@redhat.com>

tl;dr: Is anyone calling 'nova-novncproxy' or 'nova-serialproxy' with CLI arguments instead of a configuration file?

I've been doing some untangling of the console proxy services that nova provides and trying to clean up the documentation for same [1]. As part of these fixes, I noted a couple of inconsistencies in how we manage the CLI options for these services. Firstly, the 'nova-novncproxy' and 'nova-serialproxy' services accept CLI configuration options while the 'nova-xvpvncproxy' service does not.
$ nova-novncproxy --help
usage: nova-novncproxy [-h] [--vnc-auth_schemes VNC_AUTH_SCHEMES]
                       [--vnc-novncproxy_host VNC_NOVNCPROXY_HOST]
                       [--vnc-novncproxy_port VNC_NOVNCPROXY_PORT]
                       [--vnc-vencrypt_ca_certs VNC_VENCRYPT_CA_CERTS]
                       [--vnc-vencrypt_client_cert VNC_VENCRYPT_CLIENT_CERT]
                       [--vnc-vencrypt_client_key VNC_VENCRYPT_CLIENT_KEY]
                       [--cert CERT] [--config-dir DIR] [--config-file PATH]
                       ...
                       [--remote_debug-port REMOTE_DEBUG_PORT]

$ nova-xvpvncproxy --help
usage: nova-xvpvncproxy [-h] [--remote_debug-host REMOTE_DEBUG_HOST]
                        [--remote_debug-port REMOTE_DEBUG_PORT]
                        ...
                        [--version] [--watch-log-file]

This means that you could, conceivably, run 'nova-novncproxy' without a configuration file but the same would not be possible with the 'nova-xvpvncproxy' service.

Secondly, the 'nova-novncproxy' CLI options are added to a 'vnc' group, meaning they appear with an unnecessary 'vnc-' prefix (e.g. '--vnc-novncproxy_host'), and the 'nova-serialproxy' CLI options are prefixed with 'serial-' for the same reason.

Finally, none of these options are documented anywhere.

My initial plan [2] to resolve all of the above had been to add the CLI options to the 'nova-xvpvncproxy' service and then go figure out how to get oslo.config autodocumenting these for us in our man pages. However, a quick search through GitHub, codesearch.o.o and Google turned up no hits, so I wonder if anyone is configuring these things by CLI? If not, maybe we should just go and remove this code and insist on configuration via config file?

Cheers,
Stephen

[1] https://review.openstack.org/606148
[2] https://review.openstack.org/606929

From gmann at ghanshyammann.com Mon Oct 1 13:13:30 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 01 Oct 2018 22:13:30 +0900
Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names
In-Reply-To: 
References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
Message-ID: <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com>

 ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad wrote ----
 > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote:
 > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg wrote:
 > >
 > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there are no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type.
 > >
 > > I propose we consider
 > >
 > > ::[:]
 > >
 > > Example for keystone (note, action names below are strictly examples; I am fine with whatever form those actions take):
 > > identity:projects:create
 > > identity:projects:delete
 > > identity:projects:list
 > > identity:projects:get
 > >
 > > It keeps things simple and consistent when you're looking through overrides / defaults.
 > > --Morgan
 > +1 -- I think the ordering, if `resource` comes before `action|subaction`, will be cleaner.
 >
 > ++
 > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead of projects). For example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)

Do we need the "action" word there? I think the action name itself should convey the operation. IMO the below notation without the "action" word looks clear enough. What do you say?
compute:server:reboot compute:server:confirm_resize -gmann > > Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nick-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list." > In summary, the new convention based on the most recent feedback should be: > ::[:] > Rules:service-type is always defined in the service types authority > resources are always singular > Thanks to all for sticking through this tedious discussion. I appreciate it. > /R > > Harry > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: > >> > >> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: > >> > >> :: > >> > >> or > >> > >> :_ > >> > >> Where is the corresponding service type of the project [0], and is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > >> > >> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). > >> > >> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: > >> > >> ::: > >> > >> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. > >> > >> What color do you want the shed to be? > >> > >> [0] https://service-types.openstack.org/service-types.json > >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: > >>> > >>> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: > >>>> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > >>>> > tl;dr+1 consistent names > >>>> > I would make the names mirror the API... because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them > >>>> > >>>> Big +1 on consistent naming which will help operator as well as developer to maintain those. > >>>> > >>>> > > >>>> > Lance Bragstad wrote: > >>>> > > I'm curious if anyone has context on the "os-" part of the format? > >>>> > > >>>> > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code > >>>> > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! > >>>> > Lance Bragstad wrote: > >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? > >>>> > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? 
Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). > >>>> > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. > >>>> > >>>> Agree on mapping the policy name with api-ref as much as possible. Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. > >>> > >>> > >>> Good question. I think this depends on how the service does policy enforcement. > >>> > >>> I know we did something like this in keystone, which required policy names and method names to be the same: > >>> > >>> "identity:list_users": "..." > >>> > >>> Because the initial implementation of policy enforcement used a decorator like this: > >>> > >>> from keystone import controller > >>> > >>> @controller.protected > >>> def list_users(self): > >>> ... > >>> > >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. > >>> > >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone. > >>> > >>> Hopefully, that helps explain why we needed them to match. > >>> > >>>> > >>>> > >>>> Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" > >>> > >>> > >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. > >>> > >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > >>> > >>>> > >>>> > >>>> May be we can make them consistent with :: or any better opinion. > >>>> > >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. 
> >>>> > > >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: > >>>> > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... keypairs, yuck, but its how they work)** deprecate in rule scope checks, which are largely bogus in Nova anyway* make read/write/admin distinction** therefore adding the "noop" role, amount other things > >>>> > >>>> + policy granularity. > >>>> > >>>> It is good idea to make the policy improvement all together and for all rules as you mentioned. But my worries is how much load it will be on operator side to migrate all policy rules at same time? What will be the deprecation period etc which i think we can discuss on proposed spec - https://review.openstack.org/#/c/547850 > >>> > >>> > >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. > >>> > >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. > >>> > >>>> > >>>> > >>>> > >>>> -gmann > >>>> > >>>> > Thanks,John __________________________________________________________________________ > >>>> > OpenStack Development Mailing List (not for usage questions) > >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > >>>> > >>>> > >>>> __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Mon Oct 1 13:14:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 01 Oct 2018 22:14:06 +0900 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <20180928203318.GA3769@sm-workstation> Message-ID: 
<1662fc3b44e.10c358bd832277.7078958170608467364@ghanshyammann.com> ---- On Sat, 29 Sep 2018 07:23:30 +0900 Lance Bragstad wrote ---- > Alright - I've worked up the majority of what we have in this thread and proposed a documentation patch for oslo.policy [0]. > I think we're at the point where we can finish the rest of this discussion in gerrit if folks are ok with that. > [0] https://review.openstack.org/#/c/606214/ +1, thanks for that. let's start the discussion there. -gmann > On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis wrote: > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > > wrote: > > > > > > > > Ideally I would like to see it in the form of least specific to most > > > specific. But more importantly in a way that there is no additional > > > delimiters between the service type and the resource. Finally, I do not > > > like the change of plurality depending on action type. > > > > > > > > I propose we consider > > > > > > > > ::[:] > > > > > > > > Example for keystone (note, action names below are strictly examples I > > > am fine with whatever form those actions take): > > > > identity:projects:create > > > > identity:projects:delete > > > > identity:projects:list > > > > identity:projects:get > > > > > > > > It keeps things simple and consistent when you're looking through > > > overrides / defaults. > > > > --Morgan > > > +1 -- I think the ordering if `resource` comes before > > > `action|subaction` will be more clean. > > > > > > > Great idea. This is looking better and better. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Mon Oct 1 16:19:16 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 1 Oct 2018 09:19:16 -0700 Subject: [Openstack-operators] [neutron][lbaas][neutron-lbaas][octavia] Update on the previously announced deprecation of neutron-lbaas and neutron-lbaas-dashboard In-Reply-To: <18CD3109-FCB7-4D04-A23D-C4E2FAAE3AC4@gmail.com> References: <18CD3109-FCB7-4D04-A23D-C4E2FAAE3AC4@gmail.com> Message-ID: Hi Sam, We worked with the providers[1] to develop the new provider driver specification for Octavia and from that process the existing drivers will need some modifications. Our hope is this new specification will speed up the availability of vendor driver updates by empowering the providers to manage their own patches and releases. I have created this page to help track the available provider drivers: https://docs.openstack.org/octavia/latest/admin/providers.html To date, there are two drivers (Octavia Amphroa and OVN) available and two third party providers are almost ready with their drivers. Unfortunately I do not have any information to share about the MidoNet provider. I recommend you contact them directly to understand their plans for migrating to Octavia. Michael [1] https://review.openstack.org/#/c/509957/ On Sun, Sep 30, 2018 at 10:41 PM Sam Morrison wrote: > > Hi Michael, > > Are all the backends that are supported by lbaas supported by octavia? I can’t see a page that lists the supported backends. > > Eg. We use lbaas with the midonet driver and I can’t tell if this will still work when switching over? 
> > > Thanks, > Sam > > > > > On 29 Sep 2018, at 8:07 am, Michael Johnson wrote: > > > > During the Queens release cycle we announced the deprecation of > > neutron-lbaas and neutron-lbaas-dashboard[1]. > > > > Today we are announcing the expected end date for the neutron-lbaas > > and neutron-lbaas-dashboard deprecation cycles. During September 2019 > > or the start of the “U” OpenStack release cycle, whichever comes > > first, neutron-lbaas and neutron-lbaas-dashboard will be retired. This > > means the code will be be removed and will not be released as part of > > the "U" OpenStack release per the infrastructure team’s “retiring a > > project” process[2]. > > > > We continue to maintain a Frequently Asked Questions (FAQ) wiki page > > to help answer additional questions you may have about this process: > > https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > > > > For more information or if you have additional questions, please see > > the following resources: > > > > The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > > > > The Octavia documentation: https://docs.openstack.org/octavia/latest/ > > > > Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas > > > > Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the > > Freenode IRC network. > > > > Sending email to the OpenStack developer mailing list: openstack-dev > > [at] lists [dot] openstack [dot] org. Please prefix the subject with > > '[openstack-dev][Octavia]' > > > > Thank you for your support and patience during this transition, > > > > Michael Johnson > > Octavia PTL > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html > > [2] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From jp.methot at planethoster.info Mon Oct 1 21:51:55 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Mon, 1 Oct 2018 17:51:55 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: <42B3CC72-9B2E-47C6-A18F-6FAD60E1FAEF@planethoster.info> References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> <3EC9A862-7474-48AA-B1B2-473C7F656A36@redhat.com> <42B3CC72-9B2E-47C6-A18F-6FAD60E1FAEF@planethoster.info> Message-ID: So, after some testing, we finally fixed our issue of lost connections to instances. The actual issue was that the ARP table on the network node was hitting its limit constantly and thus, discarding legitimate routes. This caused our connections to flap and the HA routers to switch node without warning. Increasing net.ipv4.neigh.default.gc_thresh1, net.ipv4.neigh.default.gc_thresh2 and net.ipv4.neigh.default.gc_thresh3 kernel values ended up fixing the issue. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 28 sept. 2018 à 10:53, Jean-Philippe Méthot a écrit : > > Thank you, I will try it next week (since today is Friday) and update this thread if it has fixed my issues. We are indeed using the latest RDO Pike, so ovsdbapp 0.4.3.1 . 
> > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > > > > >> Le 28 sept. 2018 à 03:03, Slawomir Kaplonski > a écrit : >> >> Hi, >> >> What version of Neutron and ovsdbapp You are using? IIRC there was such issue somewhere around Pike version, we saw it in functional tests quite often. But later with new ovsdbapp version I think that this problem was somehow solved. >> Maybe try newer version of ovsdbapp and check if it will be better. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Oct 1 23:57:30 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Oct 2018 16:57:30 -0700 Subject: [Openstack-operators] Berlin Community Contributor Awards In-Reply-To: References: Message-ID: Hello :) I wanted to bring this to the top of people's inboxes as we have three weeks left to submit community members[1]. I can think of a dozen people right now that deserve an award and I am sure you all could do the same. It only takes a few minutes and its an easy way to make sure they get the recognition they deserve. Show your appreciation and nominate one person. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas On Fri, Aug 24, 2018 at 11:15 AM Kendall Nelson wrote: > Hello Everyone! > > As we approach the Summit (still a ways away thankfully), I thought I > would kick off the Community Contributor Award nominations early this > round. > > For those of you that already know what they are, here is the form[1]. > > For those of you that have never heard of the CCA, I'll briefly explain > what they are :) We all know people in the community that do the dirty > jobs, we all know people that will bend over backwards trying to help > someone new, we all know someone that is a savant in some area of the code > we could never hope to understand. These people rarely get the thanks they > deserve and the Community Contributor Awards are a chance to make sure they > know that they are appreciated for the amazing work they do and skills they > have. > > So go forth and nominate these amazing community members[1]! Nominations > will close on October 21st at 7:00 UTC and winners will be announced at the > OpenStack Summit in Berlin. > > -Kendall (diablo_rojo) > > [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Oct 2 15:15:05 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 2 Oct 2018 11:15:05 -0400 Subject: [Openstack-operators] Ops Meetup Team meeting 2018/10/2 Message-ID: We had a good meeting on IRC today, minutes below. Current focus is on the Forum event at the upcoming OpenStack Summit in Berlin in November. We are going to try and pull together a social events for openstack operators on the Tuesday night after the marketplace mixer. Further items under discussion include the first inter-summit meetup, which is likely to be in europe in early march and will most likely feature a research track, early discussions about the first 2019 summit, forum, ptg event and finally target region for the second meetup (likely to be north america). 
if you'd like to hear more, get involved or submit your suggestions for future OpenStack Operators related events, please get in touch! Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.html Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.txt Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.log.html Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Tue Oct 2 21:24:18 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 2 Oct 2018 23:24:18 +0200 Subject: [Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG Message-ID: Hi everyone, Time for a new meeting for PCWG - 3rd October 0700 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg Talk to you in a couple of hours! Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From melwittt at gmail.com Wed Oct 3 22:07:38 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 3 Oct 2018 15:07:38 -0700 Subject: [Openstack-operators] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service? Message-ID: <8682e817-4ba9-8f76-173d-619896050176@gmail.com> Greetings Devs and Ops, Today I noticed that our code does not handle the 'nova-console' service properly in a multi-cell deployment and given that no one has complained or reported bugs about it, we're wondering if anyone still uses the nova-console service. The documentation [1] says that the nova-console service is a "XenAPI-specific service that most recent VNC proxy architectures do not use." Can anyone from xenapi land shed some light on whether the nova-console service is still useful in deployments using the xenapi driver, or is it an old relic that we should deprecate and remove? Thanks for your help, -melanie [1] https://docs.openstack.org/nova/latest/admin/remote-console-access.html From bob.ball at citrix.com Thu Oct 4 12:03:41 2018 From: bob.ball at citrix.com (Bob Ball) Date: Thu, 4 Oct 2018 12:03:41 +0000 Subject: [Openstack-operators] [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service? In-Reply-To: <8682e817-4ba9-8f76-173d-619896050176@gmail.com> References: <8682e817-4ba9-8f76-173d-619896050176@gmail.com> Message-ID: <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net> Hi Melanie, We recommend using novncproxy_base_url with vncserver_proxyclient_address set to the dom0's management IP address. We don't currently use nova-console, so deprecation would be the best approach. Thanks, Bob -----Original Message----- From: melanie witt [mailto:melwittt at gmail.com] Sent: 03 October 2018 23:08 To: OpenStack Development Mailing List (not for usage questions) ; openstack-operators at lists.openstack.org Subject: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service? Greetings Devs and Ops, Today I noticed that our code does not handle the 'nova-console' service properly in a multi-cell deployment and given that no one has complained or reported bugs about it, we're wondering if anyone still uses the nova-console service. 
The documentation [1] says that the nova-console service is a "XenAPI-specific service that most recent VNC proxy architectures do not use." Can anyone from xenapi land shed some light on whether the nova-console service is still useful in deployments using the xenapi driver, or is it an old relic that we should deprecate and remove? Thanks for your help, -melanie [1] https://docs.openstack.org/nova/latest/admin/remote-console-access.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sfinucan at redhat.com Thu Oct 4 12:57:52 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 04 Oct 2018 13:57:52 +0100 Subject: [Openstack-operators] [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service? In-Reply-To: <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net> References: <8682e817-4ba9-8f76-173d-619896050176@gmail.com> <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net> Message-ID: On Thu, 2018-10-04 at 12:03 +0000, Bob Ball wrote: > Hi Melanie, > > We recommend using novncproxy_base_url with > vncserver_proxyclient_address set to the dom0's management IP > address. > > We don't currently use nova-console, so deprecation would be the best > approach. > > Thanks, > > Bob What about nova-xvpvncproxy [1]? This would be configured using xvpvncproxy_base_url. This is also Xen-specific (as the name, Xen VNC Proxy, would suggest). If the noVNC-based console is now recommended, can we also deprecate the XVP one? Stephen [1] https://review.openstack.org/#/c/606148/5/doc/source/admin/remote-console-access.rst at 313 > -----Original Message----- > From: melanie witt [mailto:melwittt at gmail.com] > Sent: 03 October 2018 23:08 > To: OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org>; > openstack-operators at lists.openstack.org > Subject: [openstack-dev] [nova][xenapi] can we deprecate the xenapi- > specific 'nova-console' service? > > Greetings Devs and Ops, > > Today I noticed that our code does not handle the 'nova-console' > service properly in a multi-cell deployment and given that no one has > complained or reported bugs about it, we're wondering if anyone still > uses the nova-console service. The documentation [1] says that the > nova-console service is a "XenAPI-specific service that most recent > VNC proxy architectures do not use." > > Can anyone from xenapi land shed some light on whether the nova- > console service is still useful in deployments using the xenapi > driver, or is it an old relic that we should deprecate and remove? 
> > Thanks for your help, > -melanie > > [1] > https://docs.openstack.org/nova/latest/admin/remote-console-access.html class="Apple-tab-span" style="white-space:pre"> > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tobias.urdin at binero.se Fri Oct 5 12:36:09 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 5 Oct 2018 14:36:09 +0200 Subject: [Openstack-operators] [puppet] Heads up for changes causing restarts! Message-ID: <7e95fa90-448d-cdd2-5541-2187f7f81f6d@binero.se> Hello, Due to bugs and fixes that has been needed we are probably going to merge some changes to Puppet modules which will cause a refresh of their services meaning they will be restarted. If you are following the stable branches (stable/rocky in this case) and not using tagged releases when you are pulling in the Puppet OpenStack modules we want to alert you that restarts of services might happen if you deploy new changes. These two for example is bug fixes which are probably going to be restarted causing restart of Horizon and Cinder services [1] [2] [3]. Feel free to reach out to us at #puppet-openstack if you have any concerns. [1] https://review.openstack.org/#/c/608244/ [2] https://review.openstack.org/#/c/607964/ (if backported to Rocky later on) [3] https://review.openstack.org/#/c/605071/ Best regards Tobias From lbragstad at gmail.com Mon Oct 8 13:49:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Oct 2018 08:49:21 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann wrote: > ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad < > lbragstad at gmail.com> wrote ---- > > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki > wrote: > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most > specific. But more importantly in a way that there is no additional > delimiters between the service type and the resource. Finally, I do not > like the change of plurality depending on action type. > > > > > > I propose we consider > > > > > > ::[:] > > > > > > Example for keystone (note, action names below are strictly examples > I am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through > overrides / defaults. > > > --Morgan > > +1 -- I think the ordering if `resource` comes before > > `action|subaction` will be more clean. > > > > ++ > > These are excellent points. I especially like being able to omit the > convention about plurality. 
Furthermore, I'd like to add that I think we
 > should make the resource singular (e.g., project instead of projects). For
 > example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)
 >
 > Do we need the "action" word there? I think the action name itself should
 > convey the operation. IMO the below notation without the "action" word
 > looks clear enough. What do you say?
 >
 > compute:server:reboot
 > compute:server:confirm_resize

I agree. I simplified this in the current version up for review.

 > -gmann
 >
 > > Otherwise, someone might mistake compute:servers:get as "list". This is
 > > ultra-nit-picky, but something I thought of when seeing the usage of
 > > "get_all" in policy names in favor of "list."
 > > In summary, the new convention based on the most recent feedback should be:
 > > ::[:]
 > > Rules:
 > > - service-type is always defined in the service types authority
 > > - resources are always singular
 > > Thanks to all for sticking through this tedious discussion. I appreciate it.
 > > /R
 > >
 > > Harry
 > >
 > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote:
 > >> Bumping this thread again and proposing two conventions based on the
 > >> discussion here. I propose we decide on one of the two following
 > >> conventions:
 > >>
 > >> ::
 > >>
 > >> or
 > >>
 > >> :_
 > >>
 > >> Where is the corresponding service type of the project [0], and
 > >> is either create, get, list, update, or delete. I think decoupling
 > >> the method from the policy name should aid in consistency, regardless of
 > >> the underlying implementation. The HTTP method specifics can still be
 > >> relayed using oslo.policy's DocumentedRuleDefault object [1].
 > >>
 > >> I think the plurality of the resource should default to what makes sense
 > >> for the operation being carried out (e.g., list:foobars, create:foobar).
 > >>
 > >> I don't mind the first one because it's clear about what the delimiter is
 > >> and it doesn't look weird when projects have something like:
 > >>
 > >> :::
 > >>
 > >> If folks are ok with this, I can start working on some documentation that
 > >> explains the motivation for this. Afterward, we can figure out how we
 > >> want to track this work.
 > >>
 > >> What color do you want the shed to be?
 > >>
 > >> [0] https://service-types.openstack.org/service-types.json
 > >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
 > >>
 > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote:
 > >>>
 > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote:
 > >>>>
 > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ----
 > >>>> > tl;dr: +1 consistent names
 > >>>> > I would make the names mirror the API... because the Operator setting
 > >>>> > them knows the API, not the code. Ignore the crazy names in Nova, I
 > >>>> > certainly hate them.
 > >>>>
 > >>>> Big +1 on consistent naming which will help operators as well as
 > >>>> developers to maintain those.
 > >>>>
 > >>>> > Lance Bragstad wrote:
 > >>>> > > I'm curious if anyone has context on the "os-" part of the format?
> >>>> > My memory of the Nova policy mess...
> >>>> > * Nova's policy rules traditionally followed the patterns of the code
> >>>> > ** Yes, horrible, but it happened.
> >>>> > * The code used to have the OpenStack API and the EC2 API, hence the "os"
> >>>> > * API used to expand with extensions, so the policy name is often based on extensions
> >>>> > ** note most of the extension code has now gone, including lots of related policies
> >>>> > * Policy in code was focused on getting us to a place where we could rename policy
> >>>> > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>
> >>>> > Lance Bragstad wrote:
> >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention?
>
> >>>> > I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc.
>
> >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g. "compute" or "loadbalancer").
>
> >>>> > +1. The API is known as "compute" in api-ref, so the policy should be for "compute", etc.
>
> >>>> Agree on mapping the policy name with api-ref as much as possible. Other than the policy name having 'os-', we have 'os-' in the resource name also in nova API URLs like /os-agents, /os-aggregates etc. (almost every resource except servers, flavors). As we cannot get rid of those from the API URL, do we need to keep the same in policy naming too? Or we can have a policy name like compute:agents:create/post, but that mismatches the api-ref where the agents resource URL is os-agents.
>
> >>> Good question. I think this depends on how the service does policy enforcement.
>
> >>> I know we did something like this in keystone, which required policy names and method names to be the same:
>
> >>> "identity:list_users": "..."
>
> >>> Because the initial implementation of policy enforcement used a decorator like this:
>
> >>> from keystone import controller
>
> >>> @controller.protected
> >>> def list_users(self):
> >>>     ...
>
> >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation.
>
> >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work, right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone.
>
> >>> Hopefully, that helps explain why we needed them to match.
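To make that concrete, the style we are moving toward looks roughly like this. This is a simplified sketch built directly on oslo.policy, not the actual keystone code, and the names here are illustrative only:

    from oslo_config import cfg
    from oslo_policy import policy

    ENFORCER = policy.Enforcer(cfg.CONF)

    def list_users(self, context):
        # Enforcement is an explicit call inside the controller, so the
        # policy name no longer has to be derived from the method name.
        ENFORCER.authorize('identity:list_users',
                           target={},
                           creds=context.to_policy_values())
        ...

Because the policy name is just an argument here, renaming rules to whatever convention we settle on does not require touching any decorator machinery.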
> >>>> Also we have action APIs (I know this from nova, not sure about other services) like POST /servers/{server_id}/action {addSecurityGroup}, and their current policy names are all inconsistent. A few have a policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action' in the policy name like "os_compute_api:os-admin-actions:reset_state", and a few have a direct action name like "os_compute_api:os-console-output".
>
> >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0].
>
> >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action
>
> >>>> Maybe we can make them consistent with :: or any better suggestion.
>
> >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week.
>
> >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following:
> >>>> > * move to new consistent policy name, deprecate existing name
> >>>> > * hardcode scope check to project, system or user
> >>>> > ** (user, yes... keypairs, yuck, but its how they work)
> >>>> > ** deprecate in rule scope checks, which are largely bogus in Nova anyway
> >>>> > * make read/write/admin distinction
> >>>> > ** therefore adding the "noop" role, among other things
>
> >>>> + policy granularity.
>
> >>>> It is a good idea to make the policy improvements all together and for all rules as you mentioned. But my worry is how much load it will put on the operator side to migrate all policy rules at the same time. What will be the deprecation period etc., which I think we can discuss on the proposed spec - https://review.openstack.org/#/c/547850
>
> >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here.
>
> >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume.
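For anyone who wants to see what the proposed convention could look like in practice, here is a rough sketch of registering a default with oslo.policy's DocumentedRuleDefault; the name, check string, and operations below are examples only, not a final proposal:

    from oslo_policy import policy

    server_policies = [
        policy.DocumentedRuleDefault(
            name='compute:server:reboot',
            check_str='rule:admin_or_owner',
            description='Reboot a server.',
            operations=[{'method': 'POST',
                         'path': '/servers/{server_id}/action (reboot)'}]),
    ]

    def list_rules():
        # Returned list is registered with the service's Enforcer and
        # consumed by the sample-file and documentation generators.
        return server_policies

The HTTP method and path live in the operations list, which is what the documentation tooling reads, so the policy name itself stays free of HTTP details. A deployment that overrides the rule then only has to know the one well-known name, e.g. "compute:server:reboot": "role:admin".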
> > >>> > > >>>> > > >>>> > > >>>> > > >>>> -gmann > > >>>> > > >>>> > Thanks,John > __________________________________________________________________________ > > >>>> > OpenStack Development Mailing List (not for usage questions) > > >>>> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >>>> > > > >>>> > > >>>> > > >>>> > > >>>> > __________________________________________________________________________ > > >>>> OpenStack Development Mailing List (not for usage questions) > > >>>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > >> > __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Oct 8 14:28:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 8 Oct 2018 09:28:06 -0500 Subject: [Openstack-operators] [sean.mcginnis@gmx.com: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables] Message-ID: <20181008142806.GD17162@sm-workstation> Hello operators! I'm trying to reach out to more folks that this might impact. There are nitty gritty details below, but to summarize - the release team is looking at changing the way we currently do releases for the main service projects. So far, we have been doing three milestone releases throughout a development cycle, one or more release candidates, then the final release for the cycle. >From what we could tell, these milestone releases were really not being used by anyone (mostly) so to a degree, they were just busy work that we were imposing on the project teams. We are thinking of changing the release model to get rid of the milestones and just do the release candidates as we finalize the release. 
Those can be considered close to done for the purposes of those wanting to get an early start on testing deployments and doing any kind of packaging. The only problem we've seen is without the milestone releases during the cycle, there is quite a gap before the version numbers get incremented for anyone doing more frequent testing. We wanted to reach out the operator community since I know there are at least a few of you that do "roll your own" for deploying upstream code. If anyone knows this change would have a negative impact for you, please do let us know so we can take that into consideration or can make any tweaks to our plans. We want to reduce work put on the teams, but we don't want to do that at the expense of preventing others from being able to get their work done. Thanks! Sean ----- Forwarded message from Sean McGinnis ----- Date: Wed, 26 Sep 2018 09:22:30 -0500 From: Sean McGinnis To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables Reply-To: "OpenStack Development Mailing List (not for usage questions)" User-Agent: Mutt/1.10.1 (2018-07-13) During the Stein PTG in Denver, the release management team talked about ways we can make things simpler and reduce the "paper pushing" work that all teams need to do right now. One topic that came up was the usefulness of pushing tags around milestones during the cycle. There were a couple of needs identified for doing such "milestone releases": 1) It tests the release automation machinery to identify problems before the RC and final release crunch time. 2) It creates a nice cadence throughout the cycle to help teams stay on track and focus on the right things for each phase of the cycle. 3) It gives us an indication that teams are healthy, active, and planning to include their components in the final release. One of the big motivators in the past was also to have output that downstream distros and users could pick up for testing and early packaging. Based on our admittedly anecdotal small sample, it doesn't appear this is actually a big need, so we propose to stop tagging milestone releases for the cycle-with-milestone projects. We would still have "milestones" during the cycle to facilitate work organization and create a cadence: teams should still be aware of them, and we will continue to communicate those dates in the schedule and in the release countdown emails. But you would no longer be required to request a release for each milestone. Beta releases would be optional: if teams do want to have some beta version tags before the final release they can still request them - whether on one of the milestone dates, or whenever there is the need for the project. Release candidates would still require a tag. To facilitate that step and guarantee we have a release candidate for every deliverable, the release team proposes to automatically generate a release request early in the week of the RC deadline. That patch would be used as a base to communicate with the team: if a team wants to wait for a specific patch to make it to the RC, someone from the team can -1 the patch to have it held, or update that patch with a different commit SHA. If there are no issues, ideally we would want a +1 from the PTL and/or release liaison to indicate approval, but we would also consider no negative feedback as an indicator that the automatically proposed patches without a -1 can all be approved at the end of the RC deadline week. 
To cover point (3) above, and clearly know that a project is healthy and should be included in the coordinated release, we are thinking of requiring a person for each team to add their name to a "manifest" of sorts for the release cycle. That "final release liaison" person would be the designated person to follow through on finishing out the releases for that team, and would be designated ahead of the final release phases.

With all these changes, we would rename the cycle-with-milestones release model to something like cycle-with-rc.

FAQ:

Q: Does this mean I don't need to pay attention to releases any more and the release team will just take care of everything?
A: No. We still want teams engaged in the release cycle and would feel much more comfortable if we get an explicit +1 from the team on any proposed tags or releases.

Q: Who should sign up to be the final release liaison?
A: Anyone in the team really. Could be the PTL, the standing release liaison, or someone else stepping up to cover that role.

--
Thanks!
The Release Team

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

----- End forwarded message -----

From doug at doughellmann.com Mon Oct 8 18:13:46 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 08 Oct 2018 14:13:46 -0400
Subject: [Openstack-operators] [sean.mcginnis@gmx.com: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables]
In-Reply-To: <20181008142806.GD17162@sm-workstation>
References: <20181008142806.GD17162@sm-workstation>
Message-ID: 

Sean McGinnis writes:

[snip]

> The only problem we've seen is without the milestone releases during the cycle,
> there is quite a gap before the version numbers get incremented for anyone
> doing more frequent testing. We wanted to reach out the operator community
> since I know there are at least a few of you that do "roll your own" for
> deploying upstream code.

[snip]

This specifically affects folks doing *upgrade* testing from the stable branch to pre-release versions on master, since during this period of time the version on master appears lower than the version on the stable branch.

Doug

From majopela at redhat.com Tue Oct 9 07:17:35 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Tue, 9 Oct 2018 09:17:35 +0200
Subject: [Openstack-operators] [SIGS] Ops Tools SIG
Message-ID: 

Hello

Yesterday, during the Oslo meeting we discussed [6] the possibility of creating a new Special Interest Group [1][2] to provide a home and release means for operator related tools [3] [4] [5].

I continued the discussion with M.Hillsman later, and he made me aware of the operator working group and mailing list, which existed even before the SIGs.

I believe it could be a very good idea to give life and more visibility to all those very useful tools (for example, I didn't know some of them existed ...).

Given this, I have two questions:

1) Do you know of more tools which could find a home under an Ops Tools SIG umbrella?
2) Do you want to join us?

Best regards and have a great day.
[1] https://governance.openstack.org/sigs/
[2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
[3] https://wiki.openstack.org/wiki/Osops
[4] http://git.openstack.org/cgit/openstack/ospurge/tree/
[5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
[6] http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130

--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sbauza at redhat.com Tue Oct 9 15:35:09 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 9 Oct 2018 17:35:09 +0200
Subject: [Openstack-operators] [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations
In-Reply-To: 
References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc>
Message-ID: 

> Shit, I forgot to add openstack-operators at ...
> Operators, see my question for you here:
>
>> On Tue, 9 Oct 2018 at 16:39, Eric Fried wrote:
>>
>>> IIUC, the primary thing the force flag was intended to do - allow an
>>> instance to land on the requested destination even if that means
>>> oversubscription of the host's resources - doesn't happen anymore since
>>> we started making the destination claim in placement.
>>>
>>> IOW, since Pike, you don't actually see a difference in behavior by
>>> using the force flag or not. (If you do, it's more likely a bug than
>>> what you were expecting.)
>>>
>>> So there's no reason to keep it around. We can remove it in a new
>>> microversion (or not); but even in the current microversion we need not
>>> continue making convoluted attempts to observe it.
>>>
>>> What that means is that we should simplify everything down to ignore the
>>> force flag and always call GET /a_c. Problem solved - for nested and/or
>>> sharing, NUMA or not, root resources or no, on the source and/or
>>> destination.
>>>
>>
>> While I tend to agree with Eric here (and I commented on the review
>> accordingly by saying we should signal the new behaviour by a
>> microversion), I still think we need to properly advertise this, adding
>> openstack-operators@ accordingly.
>> Disclaimer: since we have gaps on OSC, the current OSC behaviour when
>> you "openstack server live-migrate " is to *force* the destination
>> by not calling the scheduler. Yeah, it sucks.
>>
>> Operators, what are the exact cases (for those running clouds newer than
>> Mitaka, ie. Newton and above) when you make use of the --force option for
>> live migration with a microversion newer than or equal to 2.29?
>> In general, even in the case of an emergency, you still want to make sure
>> you don't throw your compute under the bus by massively migrating instances
>> that would create an undetected snowball effect by having this compute
>> refusing new instances. Or are you disabling the target compute service
>> first and throwing your pet instances up there?
>>
>> -Sylvain
>>
>> -efried
>>>
>>> On 10/09/2018 04:40 AM, Balázs Gibizer wrote:
>>> > Hi,
>>> >
>>> > Setup
>>> > -----
>>> >
>>> > nested allocation: an allocation that contains resources from one or
>>> > more nested RPs. (if you have a better term for this then please
>>> > suggest).
>>> >
>>> > If an instance has a nested allocation it means that the compute it
>>> > allocates from has a nested RP tree.
>>> > BUT if a compute has a nested RP tree it does not automatically mean
>>> > that the instance allocating from that compute has a nested allocation
>>> > (e.g. bandwidth inventory will be on nested RPs but not every instance
>>> > will require bandwidth).
>>> >
>>> > Afaiu, as soon as we have NUMA modelling in place the most trivial
>>> > servers will have nested allocations as CPU and MEMORY inventory will
>>> > be moved to the nested NUMA RPs. But NUMA is still in the future.
>>> >
>>> > Sidenote: there is an edge case reported by bauzas when an instance
>>> > allocates _only_ from nested RPs. This was discussed last Friday and
>>> > it resulted in a new patch[0] but I would like to keep that discussion
>>> > separate from this if possible.
>>> >
>>> > Sidenote: the current problem is somewhat related not just to nested
>>> > RPs but to sharing RPs as well. However I'm not aiming to implement
>>> > sharing support in Nova right now so I also try to keep the sharing
>>> > discussion separate if possible.
>>> >
>>> > There was already some discussion at Monday's scheduler meeting but
>>> > I could not attend.
>>> > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
>>> >
>>> > The meat
>>> > --------
>>> >
>>> > Both live-migrate[1] and evacuate[2] have an optional force flag on the
>>> > nova REST API. The documentation says: "Force by not
>>> > verifying the provided destination host by the scheduler."
>>> >
>>> > Nova implements this statement by not calling the scheduler if
>>> > force=True BUT still trying to manage allocations in placement.
>>> >
>>> > To have an allocation on the destination host Nova blindly copies the
>>> > instance allocation from the source host to the destination host during
>>> > these operations. Nova can do that as 1) the whole allocation is
>>> > against a single RP (the compute RP) and 2) Nova knows both the source
>>> > compute RP and the destination compute RP.
>>> >
>>> > However as soon as we bring nested allocations into the picture that
>>> > blind copy will not be feasible. Possible cases:
>>> > 0) The instance has a non-nested allocation on the source and would need
>>> > a non-nested allocation on the destination. This works with the blind
>>> > copy today.
>>> > 1) The instance has a nested allocation on the source and would need a
>>> > nested allocation on the destination as well.
>>> > 2) The instance has a non-nested allocation on the source and would
>>> > need a nested allocation on the destination.
>>> > 3) The instance has a nested allocation on the source and would need a
>>> > non-nested allocation on the destination.
>>> >
>>> > Nova cannot generate nested allocations easily without reimplementing
>>> > some of the placement allocation candidate (a_c) code. However I don't
>>> > like the idea of duplicating some of the a_c code in Nova.
>>> >
>>> > Nova cannot detect what kind of allocation (nested or non-nested) an
>>> > instance would need on the destination without calling placement a_c.
>>> > So knowing when to call placement is a chicken and egg problem.
>>> >
>>> > Possible solutions:
>>> > A) fail fast
>>> > ------------
>>> > 0) Nova can detect that the source allocation is non-nested, try the
>>> > blind copy, and it will succeed.
>>> > 1) Nova can detect that the source allocation is nested and fail the
>>> > operation.
>>> > 2) Nova only sees a non-nested source allocation.
Even if the dest RP >>> > tree is nested it does not mean that the allocation will be nested. We >>> > cannot fail fast. Nova can try the blind copy and allocate every >>> > resources from the root RP of the destination. If the instance require >>> > nested allocation instead the claim will fail in placement. So nova >>> can >>> > fail the operation a bit later than in 1). >>> > 3) Nova can detect that the source allocation is nested and fail the >>> > operation. However and enhanced blind copy that tries to allocation >>> > everything from the root RP on the destinaton would have worked. >>> > >>> > B) Guess when to ignore the force flag and call the scheduler >>> > ------------------------------------------------------------- >>> > 0) keep the blind copy as it works >>> > 1) Nova detect that the source allocation is nested. Ignores the force >>> > flag and calls the scheduler that will call placement a_c. Move >>> > operation can succeed. >>> > 2) Nova only sees a non nested source allocation so it will fall back >>> > to blind copy and fails at the claim on destination. >>> > 3) Nova detect that the source allocation is nested. Ignores the force >>> > flag and calls the scheduler that will call placement a_c. Move >>> > operation can succeed. >>> > >>> > This solution would be against the API doc that states nova does not >>> > call the scheduler if the operation is forced. However in case of >>> force >>> > live-migration Nova already verifies the target host from couple of >>> > perspective in [3]. >>> > This solution is alreay proposed for live-migrate in [4] and for >>> > evacuate in [5] so the complexity of the solution can be seen in the >>> > reviews. >>> > >>> > C) Remove the force flag from the API in a new microversion >>> > ----------------------------------------------------------- >>> > 0)-3): all cases would call the scheduler to verify the target host >>> and >>> > generate the nested (or non-nested) allocation. >>> > We would still need an agreed behavior (from A), B), D)) for the old >>> > microversions as the todays code creates inconsistent allocation in >>> #1) >>> > and #3) by ignoring the resource from the nested RP. >>> > >>> > D) Do not manage allocations in placement for forced operation >>> > -------------------------------------------------------------- >>> > Force flag is considered as a last resort tool for the admin to move >>> > VMs around. The API doc has a fat warning about the danger of it. So >>> > Nova can simply ignore resource allocation task if force=True. Nova >>> > would delete the source allocation and does not create any allocation >>> > on the destination host. >>> > >>> > This is a simple but dangerous solution but it is what the force flag >>> > is all about, move the server against all the built in safeties. (If >>> > the admin needs the safeties she can set force=False and still specify >>> > the destination host) >>> > >>> > I'm open to any suggestions. 
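(As a concrete illustration of the "detect nested" step in option A: placement's GET /allocations/{consumer_uuid} response keys the allocations by resource provider UUID, so detecting an allocation that spans nested or sharing providers reduces to counting providers. A rough, hypothetical helper, not actual Nova code:

    def has_multi_provider_allocation(alloc_response):
        # Response shape: {'allocations': {rp_uuid: {'resources': {...}}, ...}}
        # More than one provider implies a nested (or sharing) RP is
        # part of the allocation.
        return len(alloc_response.get('allocations', {})) > 1

This is the same ">1 provider" check discussed in the replies below.)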
>>> > >>> > Cheers, >>> > gibi >>> > >>> > [0] https://review.openstack.org/#/c/608298/ >>> > [1] >>> > >>> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >>> > [2] >>> > >>> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >>> > [3] >>> > >>> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >>> > [4] https://review.openstack.org/#/c/605785 >>> > [5] https://review.openstack.org/#/c/606111 >>> > >>> > >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Oct 10 14:07:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 10 Oct 2018 09:07:58 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539097728.11166.8@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097728.11166.8@smtp.office365.com> Message-ID: <9d8fb467-71d9-74ce-2d55-5bbc0137a26f@gmail.com> On 10/9/2018 10:08 AM, Balázs Gibizer wrote: > Question for you as well: if we remove (or change) the force flag in a > new microversion then how should the old microversions behave when > nested allocations would be required? Fail fast if we can detect we have nested. We don't support forcing those types of servers. -- Thanks, Matt From mriedemos at gmail.com Wed Oct 10 14:14:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 10 Oct 2018 09:14:11 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> Message-ID: <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com> On 10/10/2018 7:46 AM, Jay Pipes wrote: >> 2) in the old microversions change the blind allocation copy to gather >> every resource from a nested source RPs too and try to allocate that >> from the destination root RP. In nested allocation cases putting this >> allocation to placement will fail and nova will fail the migration / >> evacuation. However it will succeed if the server does not need nested >> allocation neither on the source nor on the destination host (a.k.a the >> legacy case). 
>> Or if the server has a nested allocation on the source host
>> but does not need a nested allocation on the destination host (for
>> example the dest host does not have a nested RP tree yet).
>
> I disagree on this. I'd rather just do a simple check for >1 provider in
> the allocations on the source and if True, fail hard.
>
> The reverse (going from a non-nested source to a nested destination)
> will hard fail anyway on the destination because the POST /allocations
> won't work due to capacity exceeded (or failure to have any inventory at
> all for certain resource classes on the destination's root compute node).

I agree with Jay here. If we know the source has allocations on >1 provider, just fail fast. Why even walk the tree and try to claim those against the destination? The nested providers aren't going to be the same UUIDs on the destination, *and* trying to squash all of the source nested allocations into the single destination root provider and hope it works is super hacky and I don't think we should attempt that. Just fail if being forced and nested allocations exist on the source.

--
Thanks,
Matt

From dms at danplanet.com Wed Oct 10 15:58:20 2018
From: dms at danplanet.com (Dan Smith)
Date: Wed, 10 Oct 2018 08:58:20 -0700
Subject: [Openstack-operators] [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations
In-Reply-To: <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com> (Matt Riedemann's message of "Wed, 10 Oct 2018 09:14:11 -0500")
References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com>
Message-ID: 

>> I disagree on this. I'd rather just do a simple check for >1
>> provider in the allocations on the source and if True, fail hard.
>>
>> The reverse (going from a non-nested source to a nested destination)
>> will hard fail anyway on the destination because the POST
>> /allocations won't work due to capacity exceeded (or failure to have
>> any inventory at all for certain resource classes on the
>> destination's root compute node).
>
> I agree with Jay here. If we know the source has allocations on >1
> provider, just fail fast. Why even walk the tree and try to claim
> those against the destination? The nested providers aren't going to
> be the same UUIDs on the destination, *and* trying to squash all of
> the source nested allocations into the single destination root
> provider and hope it works is super hacky and I don't think we should
> attempt that. Just fail if being forced and nested allocations exist
> on the source.

Same, yeah.

--Dan

From tobias.rydberg at citynetwork.eu Thu Oct 11 10:56:27 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Thu, 11 Oct 2018 12:56:27 +0200
Subject: [Openstack-operators] [publiccloud-wg] Today's meeting for Public Cloud WG CANCELLED
Message-ID: <52c2f8d3-b456-2f45-7967-6bfe207df469@citynetwork.eu>

Hi folks,

Unfortunately we need to cancel today's meeting! Talk to you next Wednesday at 0700 UTC.
Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From majopela at redhat.com Thu Oct 11 14:19:38 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Thu, 11 Oct 2018 16:19:38 +0200
Subject: [Openstack-operators] [SIGS] Ops Tools SIG
In-Reply-To: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com>
References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com>
Message-ID: 

Adding the mailing lists back to your reply, thank you :)

I guess that +melvin.hillsman at huawei.com can help us a little bit with organizing the SIG, but the first thing would be collecting a list of tools which could be published under the umbrella of the SIG, starting with the ones already in Osops.

Publishing documentation for those tools, and the catalog under docs.openstack.org, is possibly the next step (or a parallel step).

On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister wrote:

> Hi Miguel,
>
> I would love to join this. What do I need to do?
>
> Sent from my iPhone
>
> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo wrote:
>
> Hello
>
> Yesterday, during the Oslo meeting we discussed [6] the possibility of
> creating a new Special Interest Group [1][2] to provide a home and release
> means for operator-related tools [3] [4] [5]
>
> I continued the discussion with M.Hillsman later, and he made me aware
> of the operator working group and mailing list, which existed even before
> the SIGs.
>
> I believe it could be a very good idea to give life and more
> visibility to all those very useful tools (for example, I didn't know some
> of them existed ...).
>
> Given this, I have two questions:
>
> 1) Do you know of more tools which could find a home under an Ops Tools
> SIG umbrella?
>
> 2) Do you want to join us?
>
> Best regards and have a great day.
>
> [1] https://governance.openstack.org/sigs/
> [2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
> [3] https://wiki.openstack.org/wiki/Osops
> [4] http://git.openstack.org/cgit/openstack/ospurge/tree/
> [5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
> [6] http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130
>
> --
> Miguel Ángel Ajo
> OSP / Networking DFG, OVN Squad Engineering
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openstack at nemebean.com Thu Oct 11 19:25:04 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 11 Oct 2018 14:25:04 -0500
Subject: [Openstack-operators] [oslo] Config Validator
Message-ID: <98a77026-c3fc-65ba-ac65-0dee6d687e23@nemebean.com>

Hi,

We recently merged a new feature to oslo.config and it was suggested that I publicize it since it addresses a longstanding pain point. It's a validator tool[1] that will warn or error on any entries in a config file that aren't defined in the service or are deprecated. Previously this was difficult to do accurately because config opts are registered at runtime and you don't know for sure when all of the opts are present.
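To give a feel for the workflow, the two steps look roughly like this (the paths and file names here are made up, and the flag spellings are per the linked docs at the time of writing; see the links at the end of this mail for the authoritative usage):

    # 1) Produce machine-readable option data for the service
    oslo-config-generator --config-file etc/nova/nova-config-generator.conf \
        --format yaml > nova-opts.yaml

    # 2) Check a deployed config file against that data
    oslo-config-validator --opt-data nova-opts.yaml \
        --input-file /etc/nova/nova.conf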
This tool makes use of the less recently added machine-readable sample config[2], which should contain all of the available opts for a service. If any are missing, that is a bug and should be addressed in the service anyway. This is the same data used to generate sample config files and those should have all of the possible opts listed. The one limitation I'm aware of at this point is that dynamic groups aren't handled, so options in a dynamic group will be reported as missing even though they are recognized by the service. This should be solvable, but for the moment it is a limitation to keep in mind. So if this is something you were interested in, please try it out and let us know how it works for you. The latest release of oslo.config on pypi should have this tool, and since it doesn't necessarily have to be run on the live system you can install the bleeding edge oslo.config somewhere else and just generate the machine readable sample config from the production system. That functionality has been in oslo.config for a few cycles now so it's more likely to be available. Thanks. -Ben 1: https://docs.openstack.org/oslo.config/latest/cli/validator.html 2: https://docs.openstack.org/oslo.config/latest/cli/generator.html#machine-readable-configs From mark at stackhpc.com Fri Oct 12 10:49:08 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 12 Oct 2018 11:49:08 +0100 Subject: [Openstack-operators] [kayobe][kolla] Announcing the release of Kayobe 4.0.0 Message-ID: Hi, Announcing the release of Kayobe 4.0.0. This release includes support for the Queens release of OpenStack, and is the first release of Kayobe built using the OpenStack infrastructure. Release notes: https://kayobe-release-notes.readthedocs.io/en/latest/queens.html#relnotes-4-0-0-stable-queens Documentation: https://kayobe.readthedocs.io Thanks to everyone who contributed to this release! Looking forward, we intend to catch up with the OpenStack release cycle, by making a smaller release with support for OpenStack Rocky, then moving straight onto Stein. Cheers, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Oct 12 12:21:00 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 12 Oct 2018 07:21:00 -0500 Subject: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG In-Reply-To: References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> Message-ID: <20181012122059.GB3532@sm-xps> On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: > Greetings guys, > > On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < > majopela at redhat.com> wrote: > > > Adding the mailing lists back to your reply, thank you :) > > > > I guess that +melvin.hillsman at huawei.com can > > help us a little bit organizing the SIG, > > but I guess the first thing would be collecting a list of tools which > > could be published > > under the umbrella of the SIG, starting by the ones already in Osops. > > > > Publishing documentation for those tools, and the catalog under > > docs.openstack.org > > is possibly the next step (or a parallel step). > > > > > > On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister > > wrote: > > > >> Hi Miguel, > >> > >> I would love to join this. What do I need to do? 
> >> > >> Sent from my iPhone > >> > >> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo > >> wrote: > >> > >> Hello > >> > >> Yesterday, during the Oslo meeting we discussed [6] the possibility > >> of creating a new Special Interest Group [1][2] to provide home and release > >> means for operator related tools [3] [4] [5] > >> > >> > all of those tools have python dependencies related to openstack such as > python-openstackclient or python-pbr. Which is exactly the reason why we > moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to > Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent > on openstack SIG. I believe that Cloud SIG is proper home for tools like > [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains > general tools like fluentd, sensu, collectd. > > > Hope this helps, > Martin > Hey Martin, I'm not sure I understand the issue with these tools have dependencies on other packages and the relationship to SIG ownership. Is your concern (or the history of a concern you are pointing out) that the tools would have a more difficult time if they required updates to dependencies if they are owned by a different group? Thanks! Sean From lbragstad at gmail.com Fri Oct 12 16:45:17 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 12 Oct 2018 11:45:17 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: Sending a follow up here quick. The reviewers actively participating in [0] are nearing a conclusion. Ultimately, the convention is going to be: :[:][:]:[:] Details about what that actually means can be found in the review [0]. Each piece is denoted as being required or optional, along with examples. I think this gives us a pretty good starting place, and the syntax is flexible enough to support almost every policy naming convention we've stumbled across. Now is the time if you have any final input or feedback. Thanks for sticking with the discussion. Lance [0] https://review.openstack.org/#/c/606214/ On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad wrote: > > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann > wrote: > >> ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad < >> lbragstad at gmail.com> wrote ---- >> > >> > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki >> wrote: >> > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg >> > wrote: >> > > >> > > Ideally I would like to see it in the form of least specific to >> most specific. But more importantly in a way that there is no additional >> delimiters between the service type and the resource. Finally, I do not >> like the change of plurality depending on action type. >> > > >> > > I propose we consider >> > > >> > > ::[:] >> > > >> > > Example for keystone (note, action names below are strictly >> examples I am fine with whatever form those actions take): >> > > identity:projects:create >> > > identity:projects:delete >> > > identity:projects:list >> > > identity:projects:get >> > > >> > > It keeps things simple and consistent when you're looking through >> overrides / defaults. >> > > --Morgan >> > +1 -- I think the ordering if `resource` comes before >> > `action|subaction` will be more clean. >> > >> > ++ >> > These are excellent points. I especially like being able to omit the >> convention about plurality. 
Furthermore, I'd like to add that I think we >> should make the resource singular (e.g., project instead or projects). For >> example: >> > compute:server:list >> > >> compute:server:updatecompute:server:createcompute:server:deletecompute:server:action:rebootcompute:server:action:confirm_resize >> (or confirm-resize) >> >> Do we need "action" word there? I think action name itself should convey >> the operation. IMO below notation without "äction" word looks clear enough. >> what you say? >> >> compute:server:reboot >> compute:server:confirm_resize >> > > I agree. I simplified this in the current version up for review. > > >> >> -gmann >> >> > >> > Otherwise, someone might mistake compute:servers:get, as "list". This >> is ultra-nick-picky, but something I thought of when seeing the usage of >> "get_all" in policy names in favor of "list." >> > In summary, the new convention based on the most recent feedback >> should be: >> > ::[:] >> > Rules:service-type is always defined in the service types authority >> > resources are always singular >> > Thanks to all for sticking through this tedious discussion. I >> appreciate it. >> > /R >> > >> > Harry >> > > >> > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad >> wrote: >> > >> >> > >> Bumping this thread again and proposing two conventions based on >> the discussion here. I propose we decide on one of the two following >> conventions: >> > >> >> > >> :: >> > >> >> > >> or >> > >> >> > >> :_ >> > >> >> > >> Where is the corresponding service type of the >> project [0], and is either create, get, list, update, or delete. I >> think decoupling the method from the policy name should aid in consistency, >> regardless of the underlying implementation. The HTTP method specifics can >> still be relayed using oslo.policy's DocumentedRuleDefault object [1]. >> > >> >> > >> I think the plurality of the resource should default to what makes >> sense for the operation being carried out (e.g., list:foobars, >> create:foobar). >> > >> >> > >> I don't mind the first one because it's clear about what the >> delimiter is and it doesn't look weird when projects have something like: >> > >> >> > >> ::: >> > >> >> > >> If folks are ok with this, I can start working on some >> documentation that explains the motivation for this. Afterward, we can >> figure out how we want to track this work. >> > >> >> > >> What color do you want the shed to be? >> > >> >> > >> [0] https://service-types.openstack.org/service-types.json >> > >> [1] >> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule >> > >> >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad < >> lbragstad at gmail.com> wrote: >> > >>> >> > >>> >> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < >> gmann at ghanshyammann.com> wrote: >> > >>>> >> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >> john at johngarbutt.com> wrote ---- >> > >>>> > tl;dr+1 consistent names >> > >>>> > I would make the names mirror the API... because the Operator >> setting them knows the API, not the codeIgnore the crazy names in Nova, I >> certainly hate them >> > >>>> >> > >>>> Big +1 on consistent naming which will help operator as well as >> developer to maintain those. >> > >>>> >> > >>>> > >> > >>>> > Lance Bragstad wrote: >> > >>>> > > I'm curious if anyone has context on the "os-" part of the >> format? 
>> > >>>> > >> > >>>> > My memory of the Nova policy mess...* Nova's policy rules >> traditionally followed the patterns of the code >> > >>>> > ** Yes, horrible, but it happened.* The code used to have the >> OpenStack API and the EC2 API, hence the "os"* API used to expand with >> extensions, so the policy name is often based on extensions** note most of >> the extension code has now gone, including lots of related policies* Policy >> in code was focused on getting us to a place where we could rename policy** >> Whoop whoop by the way, it feels like we are really close to something >> sensible now! >> > >>>> > Lance Bragstad wrote: >> > >>>> > Thoughts on using create, list, update, and delete as opposed >> to post, get, put, patch, and delete in the naming convention? >> > >>>> > I could go either way as I think about "list servers" in the >> API.But my preference is for the URL stub and POST, GET, etc. >> > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < >> lbragstad at gmail.com> wrote:If we consider dropping "os", should we >> entertain dropping "api", too? Do we have a good reason to keep "api"?I >> wouldn't be opposed to simple service types (e.g "compute" or >> "loadbalancer"). >> > >>>> > +1The API is known as "compute" in api-ref, so the policy >> should be for "compute", etc. >> > >>>> >> > >>>> Agree on mapping the policy name with api-ref as much as >> possible. Other than policy name having 'os-', we have 'os-' in resource >> name also in nova API url like /os-agents, /os-aggregates etc (almost every >> resource except servers , flavors). As we cannot get rid of those from API >> url, we need to keep the same in policy naming too? or we can have policy >> name like compute:agents:create/post but that mismatch from api-ref where >> agents resource url is os-agents. >> > >>> >> > >>> >> > >>> Good question. I think this depends on how the service does >> policy enforcement. >> > >>> >> > >>> I know we did something like this in keystone, which required >> policy names and method names to be the same: >> > >>> >> > >>> "identity:list_users": "..." >> > >>> >> > >>> Because the initial implementation of policy enforcement used a >> decorator like this: >> > >>> >> > >>> from keystone import controller >> > >>> >> > >>> @controller.protected >> > >>> def list_users(self): >> > >>> ... >> > >>> >> > >>> Having the policy name the same as the method name made it easier >> for the decorator implementation to resolve the policy needed to protect >> the API because it just looked at the name of the wrapped method. The >> advantage was that it was easy to implement new APIs because you only >> needed to add a policy, implement the method, and make sure you decorate >> the implementation. >> > >>> >> > >>> While this worked, we are moving away from it entirely. The >> decorator implementation was ridiculously complicated. Only a handful of >> keystone developers understood it. With the addition of system-scope, it >> would have only become more convoluted. It also enables a much more >> copy-paste pattern (e.g., so long as I wrap my method with this decorator >> implementation, things should work right?). Instead, we're calling >> enforcement within the controller implementation to ensure things are >> easier to understand. It requires developers to be cognizant of how >> different token types affect the resources within an API. That said, >> coupling the policy name to the method name is no longer a requirement for >> keystone. 
>> > >>> >> > >>> Hopefully, that helps explain why we needed them to match. >> > >>> >> > >>>> >> > >>>> >> > >>>> Also we have action API (i know from nova not sure from other >> services) like POST /servers/{server_id}/action {addSecurityGroup} and >> their current policy name is all inconsistent. few have policy name >> including their resource name like >> "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in >> policy name like "os_compute_api:os-admin-actions:reset_state" and few has >> direct action name like "os_compute_api:os-console-output" >> > >>> >> > >>> >> > >>> Since the actions API relies on the request body and uses a >> single HTTP method, does it make sense to have the HTTP method in the >> policy name? It feels redundant, and we might be able to establish a >> convention that's more meaningful for things like action APIs. It looks >> like cinder has a similar pattern [0]. >> > >>> >> > >>> [0] >> https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >> > >>> >> > >>>> >> > >>>> >> > >>>> May be we can make them consistent with >> :: or any better opinion. >> > >>>> >> > >>>> > From: Lance Bragstad > The topic of >> having consistent policy names has popped up a few times this week. >> > >>>> > >> > >>>> > I would love to have this nailed down before we go through >> all the policy rules again. In my head I hope in Nova we can go through >> each policy rule and do the following: >> > >>>> > * move to new consistent policy name, deprecate existing >> name* hardcode scope check to project, system or user** (user, yes... >> keypairs, yuck, but its how they work)** deprecate in rule scope checks, >> which are largely bogus in Nova anyway* make read/write/admin distinction** >> therefore adding the "noop" role, amount other things >> > >>>> >> > >>>> + policy granularity. >> > >>>> >> > >>>> It is good idea to make the policy improvement all together and >> for all rules as you mentioned. But my worries is how much load it will be >> on operator side to migrate all policy rules at same time? What will be the >> deprecation period etc which i think we can discuss on proposed spec - >> https://review.openstack.org/#/c/547850 >> > >>> >> > >>> >> > >>> Yeah, that's another valid concern. I know at least one operator >> has weighed in already. I'm curious if operators have specific input here. >> > >>> >> > >>> It ultimately depends on if they override existing policies or >> not. If a deployment doesn't have any overrides, it should be a relatively >> simple change for operators to consume. 
>> > >>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> -gmann >> > >>>> >> > >>>> > Thanks,John >> __________________________________________________________________________ >> > >>>> > OpenStack Development Mailing List (not for usage questions) >> > >>>> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >>>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >>>> > >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> __________________________________________________________________________ >> > >>>> OpenStack Development Mailing List (not for usage questions) >> > >>>> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> > >> >> __________________________________________________________________________ >> > >> OpenStack Development Mailing List (not for usage questions) >> > >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >> > > >> __________________________________________________________________________ >> > > OpenStack Development Mailing List (not for usage questions) >> > > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Oct 12 22:05:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 12 Oct 2018 17:05:53 -0500 Subject: [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update Message-ID: The big update this week is version 0.1.0 of oslo.upgradecheck was released. The documentation along with usage examples can be found here [1]. A big thanks to Ben Nemec for getting that done since a few projects were waiting for it. In other updates, some changes were proposed in other projects [2]. And finally, Lance Bragstad and I had a discussion this week [3] about the validity of upgrade checks looking for deleted configuration options. The main scenario I'm thinking about here is FFU where someone is going from Mitaka to Pike. Let's say a config option was deprecated in Newton and then removed in Ocata. As the operator is rolling through from Mitaka to Pike, they might have missed the deprecation signal in Newton and removal in Ocata. 
Does that mean we should have upgrade checks that look at the configuration for deleted options, or options where the deprecated alias is removed? My thought is that if things will not work once they get to the target release and restart the service code, which would definitely impact the upgrade, then checking for those scenarios is probably OK. If on the other hand the removed options were just tied to functionality that was removed and are otherwise not causing any harm then I don't think we need a check for that. It was noted that oslo.config has a new validation tool [4] so that would take care of some of this same work if run during upgrades. So I think whether or not an upgrade check should be looking for config option removal ultimately depends on the severity of what happens if the manual intervention to handle that removed option is not performed. That's pretty broad, but these upgrade checks aren't really set in stone for what is applied to them. I'd like to get input from others on this, especially operators and if they would find these types of checks useful. [1] https://docs.openstack.org/oslo.upgradecheck/latest/ [2] https://storyboard.openstack.org/#!/story/2003657 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html -- Thanks, Matt From gmann at ghanshyammann.com Sat Oct 13 11:07:10 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Oct 2018 20:07:10 +0900 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com> ---- On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad wrote ---- > Sending a follow up here quick. > The reviewers actively participating in [0] are nearing a conclusion. Ultimately, the convention is going to be: > :[:][:]:[:] > Details about what that actually means can be found in the review [0]. Each piece is denoted as being required or optional, along with examples. I think this gives us a pretty good starting place, and the syntax is flexible enough to support almost every policy naming convention we've stumbled across. > Now is the time if you have any final input or feedback. Thanks for sticking with the discussion. Thanks Lance for working on this. Current version lgtm. I would like to see some operators feedback also if this standard policy name format is clear and easy understandable. -gmann > Lance > [0] https://review.openstack.org/#/c/606214/ > > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad wrote: > > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann wrote: > ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad wrote ---- > > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there is no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type. 
> > > > > > I propose we consider > > > > > > ::[:] > > > > > > Example for keystone (note, action names below are strictly examples I am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through overrides / defaults. > > > --Morgan > > +1 -- I think the ordering if `resource` comes before > > `action|subaction` will be more clean. > > > > ++ > > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead or projects). For example: > > compute:server:list > > compute:server:updatecompute:server:createcompute:server:deletecompute:server:action:rebootcompute:server:action:confirm_resize (or confirm-resize) > > Do we need "action" word there? I think action name itself should convey the operation. IMO below notation without "äction" word looks clear enough. what you say? > > compute:server:reboot > compute:server:confirm_resize > > I agree. I simplified this in the current version up for review. > -gmann > > > > > Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nick-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list." > > In summary, the new convention based on the most recent feedback should be: > > ::[:] > > Rules:service-type is always defined in the service types authority > > resources are always singular > > Thanks to all for sticking through this tedious discussion. I appreciate it. > > /R > > > > Harry > > > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: > > >> > > >> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: > > >> > > >> :: > > >> > > >> or > > >> > > >> :_ > > >> > > >> Where is the corresponding service type of the project [0], and is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > >> > > >> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). > > >> > > >> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: > > >> > > >> ::: > > >> > > >> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. > > >> > > >> What color do you want the shed to be? > > >> > > >> [0] https://service-types.openstack.org/service-types.json > > >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > >> > > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: > > >>> > > >>> > > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: > > >>>> > > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > > >>>> > tl;dr+1 consistent names > > >>>> > I would make the names mirror the API... 
because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them > > >>>> > > >>>> Big +1 on consistent naming which will help operator as well as developer to maintain those. > > >>>> > > >>>> > > > >>>> > Lance Bragstad wrote: > > >>>> > > I'm curious if anyone has context on the "os-" part of the format? > > >>>> > > > >>>> > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code > > >>>> > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! > > >>>> > Lance Bragstad wrote: > > >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? > > >>>> > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. > > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). > > >>>> > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. > > >>>> > > >>>> Agree on mapping the policy name with api-ref as much as possible. Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. > > >>> > > >>> > > >>> Good question. I think this depends on how the service does policy enforcement. > > >>> > > >>> I know we did something like this in keystone, which required policy names and method names to be the same: > > >>> > > >>> "identity:list_users": "..." > > >>> > > >>> Because the initial implementation of policy enforcement used a decorator like this: > > >>> > > >>> from keystone import controller > > >>> > > >>> @controller.protected > > >>> def list_users(self): > > >>> ... > > >>> > > >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. > > >>> > > >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. 
It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone. > > >>> > > >>> Hopefully, that helps explain why we needed them to match. > > >>> > > >>>> > > >>>> > > >>>> Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" > > >>> > > >>> > > >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. > > >>> > > >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > >>> > > >>>> > > >>>> > > >>>> May be we can make them consistent with :: or any better opinion. > > >>>> > > >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. > > >>>> > > > >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: > > >>>> > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... keypairs, yuck, but its how they work)** deprecate in rule scope checks, which are largely bogus in Nova anyway* make read/write/admin distinction** therefore adding the "noop" role, amount other things > > >>>> > > >>>> + policy granularity. > > >>>> > > >>>> It is good idea to make the policy improvement all together and for all rules as you mentioned. But my worries is how much load it will be on operator side to migrate all policy rules at same time? What will be the deprecation period etc which i think we can discuss on proposed spec - https://review.openstack.org/#/c/547850 > > >>> > > >>> > > >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. > > >>> > > >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. 
> > >>> > > >>>> > > >>>> > > >>>> > > >>>> -gmann > > >>>> > > >>>> > Thanks,John __________________________________________________________________________ > > >>>> > OpenStack Development Mailing List (not for usage questions) > > >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >>>> > > > >>>> > > >>>> > > >>>> > > >>>> __________________________________________________________________________ > > >>>> OpenStack Development Mailing List (not for usage questions) > > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > >> __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From jean-philippe at evrard.me Mon Oct 15 08:27:39 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 15 Oct 2018 10:27:39 +0200 Subject: [Openstack-operators] [openstack-dev] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote: > The big update this week is version 0.1.0 of oslo.upgradecheck was > released. The documentation along with usage examples can be found > here > [1]. A big thanks to Ben Nemec for getting that done since a few > projects were waiting for it. > > In other updates, some changes were proposed in other projects [2]. > > And finally, Lance Bragstad and I had a discussion this week [3] > about > the validity of upgrade checks looking for deleted configuration > options. The main scenario I'm thinking about here is FFU where > someone > is going from Mitaka to Pike. Let's say a config option was > deprecated > in Newton and then removed in Ocata. 
As the operator is rolling > through > from Mitaka to Pike, they might have missed the deprecation signal > in > Newton and removal in Ocata. Does that mean we should have upgrade > checks that look at the configuration for deleted options, or > options > where the deprecated alias is removed? My thought is that if things > will > not work once they get to the target release and restart the service > code, which would definitely impact the upgrade, then checking for > those > scenarios is probably OK. If on the other hand the removed options > were > just tied to functionality that was removed and are otherwise not > causing any harm then I don't think we need a check for that. It was > noted that oslo.config has a new validation tool [4] so that would > take > care of some of this same work if run during upgrades. So I think > whether or not an upgrade check should be looking for config option > removal ultimately depends on the severity of what happens if the > manual > intervention to handle that removed option is not performed. That's > pretty broad, but these upgrade checks aren't really set in stone > for > what is applied to them. I'd like to get input from others on this, > especially operators and if they would find these types of checks > useful. > > [1] https://docs.openstack.org/oslo.upgradecheck/latest/ > [2] https://storyboard.openstack.org/#!/story/2003657 > [3] > http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 > [4] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html > Hey, Nice topic, thanks Matt! TL:DR; I would rather fail explicitly for all removals, warning on all deprecations. My concern is, by being more surgical, we'd have to decide what's "not causing any harm" (and I think deployers/users are best to determine what's not causing them any harm). Also, it's probably more work to classify based on "severity". The quick win here (for upgrade-checks) is not about being smart, but being an exhaustive, standardized across projects, and _always used_ source of truth for upgrades, which is complemented by release notes. Long answer: At some point in the past, I was working full time on upgrades using OpenStack-Ansible. Our process was the following: 1) Read all the project's releases notes to find upgrade documentation 2) With said release notes, Adapt our deploy tools to handle the upgrade, or/and write ourselves extra documentation+release notes for our deployers. 3) Try the upgrade manually, fail because some release note was missing x or y. Find root cause and retry from step 2 until success. Here is where I see upgrade checkers improving things: 1) No need for deployment projects to parse all release notes for configuration changes, as tooling to upgrade check would be directly outputting things that need to change for scenario x or y that is included in the deployment project. No need to iterate either. 2) Test real deployer use cases. The deployers using openstack-ansible have ultimate flexibility without our code changes. Which means they may have different code paths than our gating. Including these checks in all upgrades, always requiring them to pass, and making them explicit about the changes is tremendously helpful for deployers: - If config deprecations are handled as warnings as part of the same process, we will output said warnings to generate a list of action items for the deployers. 
We would use only one tool as source of truth for giving the action items (and still continue the upgrade); - If config removals are handled as errors, the upgrade will fail, which is IMO normal, as the deployer would not have respected its action items. In OSA, we could probably implement a deployer override (variable). It would allow the deployers an explicit bypass of an upgrade failure. "I know I am doing this!". It would be useful for doing multiple serial upgrades. In that case, deployers could then share together their "recipes" for handling upgrade failure bypasses for certain multi-upgrade (jumps) scenarios. After a while, we could think of feeding those back to upgrade checkers. 3) I like the approach of having oslo-config-validator. However, I must admit it's not part of our process to always validate a config file before trying to start a service in OSA. I am not sure where other deployment projects are in terms of that usage. I am not familiar with upgrade checker code, but I would love to see it re-using oslo-config- validator, as it would be the unique source of truth for upgrades before the upgrade happens (vs having to do multiple steps). If I am completely out of my league here, tell me. Just my 2 cents. Jean-Philippe Evrard (evrardjp) From mrunge at redhat.com Mon Oct 15 09:11:09 2018 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 15 Oct 2018 11:11:09 +0200 Subject: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG In-Reply-To: <20181012122059.GB3532@sm-xps> References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> <20181012122059.GB3532@sm-xps> Message-ID: <44fd333b-1133-7a31-93f6-ee9035383210@redhat.com> On 12/10/2018 14:21, Sean McGinnis wrote: > On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: >> Greetings guys, >> >> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < >> majopela at redhat.com> wrote: >> >>> Adding the mailing lists back to your reply, thank you :) >>> >>> I guess that +melvin.hillsman at huawei.com can >>> help us a little bit organizing the SIG, >>> but I guess the first thing would be collecting a list of tools which >>> could be published >>> under the umbrella of the SIG, starting by the ones already in Osops. >>> >>> Publishing documentation for those tools, and the catalog under >>> docs.openstack.org >>> is possibly the next step (or a parallel step). >>> >>> >>> On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister >>> wrote: >>> >>>> Hi Miguel, >>>> >>>> I would love to join this. What do I need to do? >>>> >>>> Sent from my iPhone >>>> >>>> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo >>>> wrote: >>>> >>>> Hello >>>> >>>> Yesterday, during the Oslo meeting we discussed [6] the possibility >>>> of creating a new Special Interest Group [1][2] to provide home and release >>>> means for operator related tools [3] [4] [5] >>>> >>>> >> all of those tools have python dependencies related to openstack such as >> python-openstackclient or python-pbr. Which is exactly the reason why we >> moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to >> Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent >> on openstack SIG. I believe that Cloud SIG is proper home for tools like >> [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains >> general tools like fluentd, sensu, collectd. 
>> >> >> Hope this helps, >> Martin >> > > Hey Martin, > > I'm not sure I understand the issue with these tools have dependencies on other > packages and the relationship to SIG ownership. Is your concern (or the history > of a concern you are pointing out) that the tools would have a more difficult > time if they required updates to dependencies if they are owned by a different > group? > > Thanks! > Sean > Hello, the mentioned sigs (opstools/cloud) are in CentOS scope and mention repository dependencies. That shouldn't bother us here now. There is already a SIG under the CentOS project, providing tools for operators[7], but also documentation and integrational bits. Also, there is some overlap with other groups and SIGs, such as Barometer[8]. Since there is already some duplication, I don't know where it makes sense to have a single group for this purpose? If that hasn't been clear yet, I'd be absolutely interested in joining/helping this effort. Matthias [7] https://wiki.centos.org/SpecialInterestGroup/OpsTools [8] https://wiki.opnfv.org/collector/pages.action?key=fastpath -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From jimmy at openstack.org Mon Oct 15 20:01:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 15 Oct 2018 15:01:07 -0500 Subject: [Openstack-operators] Forum Schedule - Seeking Community Review Message-ID: <5BC4F203.4000904@openstack.org> Hi - The Forum schedule is now up (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). If you see a glaring content conflict within the Forum itself, please let me know. You can also view the Full Schedule in the attached PDF if that makes life easier... NOTE: BoFs and WGs are still not all up on the schedule. No need to let us know :) Cheers, Jimmy -------------- next part -------------- A non-text attachment was scrubbed... Name: full-schedule (2).pdf Type: application/pdf Size: 64066 bytes Desc: not available URL: From openstack at nemebean.com Mon Oct 15 20:29:58 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 15 Oct 2018 15:29:58 -0500 Subject: [Openstack-operators] [openstack-dev] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: <1d7bfda7-615e-3e21-9174-631ca8d3910e@nemebean.com> On 10/15/18 3:27 AM, Jean-Philippe Evrard wrote: > On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote: >> The big update this week is version 0.1.0 of oslo.upgradecheck was >> released. The documentation along with usage examples can be found >> here >> [1]. A big thanks to Ben Nemec for getting that done since a few >> projects were waiting for it. >> >> In other updates, some changes were proposed in other projects [2]. >> >> And finally, Lance Bragstad and I had a discussion this week [3] >> about >> the validity of upgrade checks looking for deleted configuration >> options. The main scenario I'm thinking about here is FFU where >> someone >> is going from Mitaka to Pike. Let's say a config option was >> deprecated >> in Newton and then removed in Ocata. As the operator is rolling >> through >> from Mitaka to Pike, they might have missed the deprecation signal >> in >> Newton and removal in Ocata. Does that mean we should have upgrade >> checks that look at the configuration for deleted options, or >> options >> where the deprecated alias is removed? 
My thought is that if things >> will >> not work once they get to the target release and restart the service >> code, which would definitely impact the upgrade, then checking for >> those >> scenarios is probably OK. If on the other hand the removed options >> were >> just tied to functionality that was removed and are otherwise not >> causing any harm then I don't think we need a check for that. It was >> noted that oslo.config has a new validation tool [4] so that would >> take >> care of some of this same work if run during upgrades. So I think >> whether or not an upgrade check should be looking for config option >> removal ultimately depends on the severity of what happens if the >> manual >> intervention to handle that removed option is not performed. That's >> pretty broad, but these upgrade checks aren't really set in stone >> for >> what is applied to them. I'd like to get input from others on this, >> especially operators and if they would find these types of checks >> useful. >> >> [1] https://docs.openstack.org/oslo.upgradecheck/latest/ >> [2] https://storyboard.openstack.org/#!/story/2003657 >> [3] >> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 >> [4] >> http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html >> > > Hey, > > Nice topic, thanks Matt! > > TL:DR; I would rather fail explicitly for all removals, warning on all > deprecations. My concern is, by being more surgical, we'd have to > decide what's "not causing any harm" (and I think deployers/users are > best to determine what's not causing them any harm). > Also, it's probably more work to classify based on "severity". > The quick win here (for upgrade-checks) is not about being smart, but > being an exhaustive, standardized across projects, and _always used_ > source of truth for upgrades, which is complemented by release notes. > > Long answer: > > At some point in the past, I was working full time on upgrades using > OpenStack-Ansible. > > Our process was the following: > 1) Read all the project's releases notes to find upgrade documentation > 2) With said release notes, Adapt our deploy tools to handle the > upgrade, or/and write ourselves extra documentation+release notes for > our deployers. > 3) Try the upgrade manually, fail because some release note was missing > x or y. Find root cause and retry from step 2 until success. > > Here is where I see upgrade checkers improving things: > 1) No need for deployment projects to parse all release notes for > configuration changes, as tooling to upgrade check would be directly > outputting things that need to change for scenario x or y that is > included in the deployment project. No need to iterate either. > > 2) Test real deployer use cases. The deployers using openstack-ansible > have ultimate flexibility without our code changes. Which means they > may have different code paths than our gating. Including these checks > in all upgrades, always requiring them to pass, and making them > explicit about the changes is tremendously helpful for deployers: > - If config deprecations are handled as warnings as part of the same > process, we will output said warnings to generate a list of action > items for the deployers. We would use only one tool as source of truth > for giving the action items (and still continue the upgrade); > - If config removals are handled as errors, the upgrade will fail, > which is IMO normal, as the deployer would not have respected its > action items. 
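For what it's worth, wiring a check like that up with oslo.upgradecheck 0.1.0 doesn't take much code. A minimal sketch along the lines of the usage example in the docs follows; the project name and the check logic here are made-up placeholders:

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):

        def _check_deprecated_settings(self):
            # Illustrative only: a real check would inspect config or
            # database state here.
            deprecated_in_use = False  # pretend we detected something
            if deprecated_in_use:
                return upgradecheck.Result(
                    upgradecheck.Code.WARNING,
                    'Deprecated option foo is still set, see the '
                    'release notes for the replacement.')
            return upgradecheck.Result(upgradecheck.Code.SUCCESS,
                                       'All clear.')

        # (name shown in the CLI result table, check method)
        _upgrade_checks = (
            ('Deprecated settings', _check_deprecated_settings),
        )


    def main():
        return upgradecheck.main(
            cfg.CONF, project='myservice', upgrade_command=Checks())


    if __name__ == '__main__':
        sys.exit(main())

Each check returns a Result of SUCCESS, WARNING or FAILURE, and the idea is that the command's return code reflects the worst result, so deployment tooling can decide whether to continue or stop.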
Note that deprecated config opts should already be generating warnings in the logs. It is also possible now to use fatal-deprecations with config opts: https://github.com/openstack/oslo.config/commit/5f8b0e0185dafeb68cf04590948b9c9f7d727051 I'm not sure that's exactly what you're talking about, but those might be useful to get us at least part of the way there. > > In OSA, we could probably implement a deployer override (variable). It > would allow the deployers an explicit bypass of an upgrade failure. "I > know I am doing this!". It would be useful for doing multiple serial > upgrades. > > In that case, deployers could then share together their "recipes" for > handling upgrade failure bypasses for certain multi-upgrade (jumps) > scenarios. After a while, we could think of feeding those back to > upgrade checkers. > > 3) I like the approach of having oslo-config-validator. However, I must > admit it's not part of our process to always validate a config file > before trying to start a service in OSA. I am not sure where other > deployment projects are in terms of that usage. I am not familiar with > upgrade checker code, but I would love to see it re-using oslo-config- > validator, as it would be the unique source of truth for upgrades > before the upgrade happens (vs having to do multiple steps). > If I am completely out of my league here, tell me. This is a bit tricky as the validator requires information that is not necessarily available in a production environment. Specifically, it either needs the oslo-config-generator configuration file that lists all of the namespaces a project uses, or it needs a generated machine-readable sample config that contains all of the opt data. The latter is not generally available today, and I'm not sure whether the former is either. A quick pip install of an OpenStack service suggests that it is not. Ideally, the machine-readable sample config would be available from packages anyway as it has other uses too, but it's a pretty big ask to get all of the packagers shipping that this cycle. I'm not sure how it would work with pip installs either, although it seems like we should be able to figure out something there. Anyway, not saying we shouldn't do it, but I want to make it clear that this isn't as simple as just adding one more check to the upgrade checkers. There are some other dependencies to doing this in a non-service-specific way. > > Just my 2 cents. > Jean-Philippe Evrard (evrardjp) > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From Tim.Bell at cern.ch Tue Oct 16 06:37:50 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 16 Oct 2018 06:37:50 +0000 Subject: [Openstack-operators] [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> Jimmy, While it's not a clash within the forum, there are two sessions for Ironic scheduled at the same time on Tuesday at 14h20, each of which has Julia as a speaker. 
Tim -----Original Message----- From: Jimmy McArthur Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 15 October 2018 at 22:04 To: "OpenStack Development Mailing List (not for usage questions)" , "OpenStack-operators at lists.openstack.org" , "community at lists.openstack.org" Subject: [openstack-dev] Forum Schedule - Seeking Community Review Hi - The Forum schedule is now up (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). If you see a glaring content conflict within the Forum itself, please let me know. You can also view the Full Schedule in the attached PDF if that makes life easier... NOTE: BoFs and WGs are still not all up on the schedule. No need to let us know :) Cheers, Jimmy From majopela at redhat.com Tue Oct 16 09:17:41 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 16 Oct 2018 11:17:41 +0200 Subject: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG In-Reply-To: <44fd333b-1133-7a31-93f6-ee9035383210@redhat.com> References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> <20181012122059.GB3532@sm-xps> <44fd333b-1133-7a31-93f6-ee9035383210@redhat.com> Message-ID: Hi, Matthias and I talked this morning about this topic, and we came to realize that there's room for/would be beneficial to have a common place for: a) Documentation about second day operator tools which can be useful with OpenStack, links to repositories or availability for every distribution. b) Deployment documentation/config snippets/deployment scripts for those tools in integration with OpenStack. c) Operator tools and bits which are developed or maintained on OpenStack repos, specially the OpenStack related bits of those tools (plugins, etc), d) Home the organisation of ops-related rooms during OpenStack events, general ones related to OpenStack, and also the distro-specific ones for the distros interested in participation. Does this scope for the SIG make sense to everyone willing to participate? Best regards, Miguel Ángel. On Mon, Oct 15, 2018 at 11:12 AM Matthias Runge wrote: > On 12/10/2018 14:21, Sean McGinnis wrote: > > On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: > >> Greetings guys, > >> > >> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < > >> majopela at redhat.com> wrote: > >> > >>> Adding the mailing lists back to your reply, thank you :) > >>> > >>> I guess that +melvin.hillsman at huawei.com > can > >>> help us a little bit organizing the SIG, > >>> but I guess the first thing would be collecting a list of tools which > >>> could be published > >>> under the umbrella of the SIG, starting by the ones already in Osops. > >>> > >>> Publishing documentation for those tools, and the catalog under > >>> docs.openstack.org > >>> is possibly the next step (or a parallel step). > >>> > >>> > >>> On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister > >>> wrote: > >>> > >>>> Hi Miguel, > >>>> > >>>> I would love to join this. What do I need to do? > >>>> > >>>> Sent from my iPhone > >>>> > >>>> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo < > majopela at redhat.com> > >>>> wrote: > >>>> > >>>> Hello > >>>> > >>>> Yesterday, during the Oslo meeting we discussed [6] the > possibility > >>>> of creating a new Special Interest Group [1][2] to provide home and > release > >>>> means for operator related tools [3] [4] [5] > >>>> > >>>> > >> all of those tools have python dependencies related to openstack > such as > >> python-openstackclient or python-pbr. 
Which is exactly the reason why we > >> moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG > to > >> Cloud SIG. AFAIR we had some issues of having opstools SIG being > dependent > >> on openstack SIG. I believe that Cloud SIG is proper home for tools like > >> [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains > >> general tools like fluentd, sensu, collectd. > >> > >> > >> Hope this helps, > >> Martin > >> > > > > Hey Martin, > > > > I'm not sure I understand the issue with these tools have dependencies > on other > > packages and the relationship to SIG ownership. Is your concern (or the > history > > of a concern you are pointing out) that the tools would have a more > difficult > > time if they required updates to dependencies if they are owned by a > different > > group? > > > > Thanks! > > Sean > > > > Hello, > > the mentioned sigs (opstools/cloud) are in CentOS scope and mention > repository dependencies. That shouldn't bother us here now. > > > There is already a SIG under the CentOS project, providing tools for > operators[7], but also documentation and integrational bits. > > Also, there is some overlap with other groups and SIGs, such as > Barometer[8]. > > Since there is already some duplication, I don't know where it makes > sense to have a single group for this purpose? > > If that hasn't been clear yet, I'd be absolutely interested in > joining/helping this effort. > > > Matthias > > > > [7] https://wiki.centos.org/SpecialInterestGroup/OpsTools > [8] https://wiki.opnfv.org/collector/pages.action?key=fastpath > > -- > Matthias Runge > > Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Michael Cunningham, > Michael O'Neill, Eric Shander > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Oct 16 09:59:49 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 16 Oct 2018 18:59:49 +0900 Subject: [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: <1667c51369c.f0e01e0f236210.2319928222358081529@ghanshyammann.com> ---- On Sat, 13 Oct 2018 07:05:53 +0900 Matt Riedemann wrote ---- > The big update this week is version 0.1.0 of oslo.upgradecheck was > released. The documentation along with usage examples can be found here > [1]. A big thanks to Ben Nemec for getting that done since a few > projects were waiting for it. > > In other updates, some changes were proposed in other projects [2]. > > And finally, Lance Bragstad and I had a discussion this week [3] about > the validity of upgrade checks looking for deleted configuration > options. The main scenario I'm thinking about here is FFU where someone > is going from Mitaka to Pike. Let's say a config option was deprecated > in Newton and then removed in Ocata. As the operator is rolling through > from Mitaka to Pike, they might have missed the deprecation signal in > Newton and removal in Ocata. Does that mean we should have upgrade > checks that look at the configuration for deleted options, or options > where the deprecated alias is removed? 
My thought is that if things will
> not work once they get to the target release and restart the service
> code, which would definitely impact the upgrade, then checking for those
> scenarios is probably OK. If on the other hand the removed options were
> just tied to functionality that was removed and are otherwise not
> causing any harm then I don't think we need a check for that. It was
> noted that oslo.config has a new validation tool [4] so that would take
> care of some of this same work if run during upgrades. So I think
> whether or not an upgrade check should be looking for config option
> removal ultimately depends on the severity of what happens if the manual
> intervention to handle that removed option is not performed. That's
> pretty broad, but these upgrade checks aren't really set in stone for
> what is applied to them. I'd like to get input from others on this,
> especially operators and if they would find these types of checks useful.
>
> [1] https://docs.openstack.org/oslo.upgradecheck/latest/
> [2] https://storyboard.openstack.org/#!/story/2003657
> [3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html

Another point is about policy changes and how we should accommodate them in upgrade checks. Policy changes fall into the categories below:

1. A policy rule name has been changed.
Upgrade impact: if that policy rule is overridden in policy.json then yes, we need to report this in the upgrade-check CLI. If it is not overridden, which means the operator depends on the policy defaults in code, it would not impact their upgrade.

2. A (deprecated) policy rule has been removed.
Upgrade impact: YES, as it can impact their API access after the upgrade. This needs to be covered in upgrade checks.

3. The default value (including scope) of a policy rule has been changed.
Upgrade impact: YES, this can change the access level of their API after the upgrade. This needs to be covered in upgrade checks.

4. A new policy rule has been introduced.
Upgrade impact: YES, for the same reason.

I think policy changes can be added in the upgrade checkers by checking all of the above categories, because each of them can impact the upgrade. For example, this cinder policy change [1]:

"Add granularity to the volume_extension:volume_type_encryption policy with the addition of distinct actions for create, get, update, and delete:

volume_extension:volume_type_encryption:create
volume_extension:volume_type_encryption:get
volume_extension:volume_type_encryption:update
volume_extension:volume_type_encryption:delete

To address backwards compatibility, the new rules added to the volume_type.py policy file default to the existing rule, volume_extension:volume_type_encryption, if it is set to a non-default value."
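To illustrate (a sketch only, not the actual cinder code; the API path below is just an example), backwards-compatible defaults for such granular rules could be registered with oslo.policy along these lines. The real cinder change applies the fallback conditionally, only when the old rule is set to a non-default value:

    from oslo_policy import policy

    LEGACY = 'volume_extension:volume_type_encryption'

    # Each granular rule defaults to the legacy rule, so an operator
    # override of the old name keeps applying after the upgrade.
    volume_type_encryption_policies = [
        policy.DocumentedRuleDefault(
            name='%s:%s' % (LEGACY, action),
            check_str='rule:%s' % LEGACY,
            description='%s volume type encryption.' % action.capitalize(),
            operations=[{'path': '/types/{type_id}/encryption',
                         'method': method}])
        for action, method in (('create', 'POST'), ('get', 'GET'),
                               ('update', 'PUT'), ('delete', 'DELETE'))
    ]

An upgrade check for categories 1 and 3 could then diff the operator's policy file against defaults like these and warn about any override of the old rule name.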
" [1] https://docs.openstack.org/releasenotes/cinder/unreleased.html#upgrade-notes -gmann > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From ignaziocassano at gmail.com Tue Oct 16 13:27:02 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 16 Oct 2018 15:27:02 +0200 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata Message-ID: Hi everybody, when on my ocata installation based on centos7 I update (only update not changing openstack version) some kvm compute nodes, I diescovered uuid in resource_providers nova_api db table are different from uuid in compute_nodes nova db table. This causes several errors in nova-compute service, because it not able to receive instances anymore. Aligning uuid from compute_nodes solves this problem. Could anyone tel me if it is a bug ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Oct 16 13:45:08 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 16 Oct 2018 09:45:08 -0400 Subject: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG In-Reply-To: References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> <20181012122059.GB3532@sm-xps> <44fd333b-1133-7a31-93f6-ee9035383210@redhat.com> Message-ID: On Tue, Oct 16, 2018 at 5:19 AM Miguel Angel Ajo Pelayo wrote: > > Hi, > > Matthias and I talked this morning about this topic, and we came to realize > that there's room for/would be beneficial to have a common place for: > > a) Documentation about second day operator tools which can be > useful with OpenStack, links to repositories or availability for every distribution. > Sounds like a Natural extension to the Ops Guide [1] which we've been working to return to relevance. I suppose this could also be a wiki like [2], but we should at least reference it in the guide. In any event, massive cleanup of old, outdated content really needs to be undertaken. That should be the other part of the mission I think. > b) Deployment documentation/config snippets/deployment scripts for those tools > in integration with OpenStack. > > c) Operator tools and bits which are developed or maintained on OpenStack repos, > specially the OpenStack related bits of those tools (plugins, etc), > We should probably try and revive [3] and make use of that more effectively to address b and c. We've been trying to encourage contribution to it for years, but it needs more contributors and some TLC > d) Home the organisation of ops-related rooms during OpenStack events, general > ones related to OpenStack, and also the distro-specific ones for the distros interested > in participation. > I'm not exactly sure what you mean by this item. We currently have a team responsible for meetups and pushing Ops-related content into the Forum at Summits. Do you propose merging the Ops Meetup Team into this SIG? > > Does this scope for the SIG make sense to everyone willing to participate? > > > Best regards, > Miguel Ángel. 
> [1] https://docs.openstack.org/operations-guide/ [2] https://wiki.openstack.org/wiki/Operations [3] https://wiki.openstack.org/wiki/Osops#Code -Erik > > On Mon, Oct 15, 2018 at 11:12 AM Matthias Runge wrote: >> >> On 12/10/2018 14:21, Sean McGinnis wrote: >> > On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: >> >> Greetings guys, >> >> >> >> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < >> >> majopela at redhat.com> wrote: >> >> >> >>> Adding the mailing lists back to your reply, thank you :) >> >>> >> >>> I guess that +melvin.hillsman at huawei.com can >> >>> help us a little bit organizing the SIG, >> >>> but I guess the first thing would be collecting a list of tools which >> >>> could be published >> >>> under the umbrella of the SIG, starting by the ones already in Osops. >> >>> >> >>> Publishing documentation for those tools, and the catalog under >> >>> docs.openstack.org >> >>> is possibly the next step (or a parallel step). >> >>> >> >>> >> >>> On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister >> >>> wrote: >> >>> >> >>>> Hi Miguel, >> >>>> >> >>>> I would love to join this. What do I need to do? >> >>>> >> >>>> Sent from my iPhone >> >>>> >> >>>> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo >> >>>> wrote: >> >>>> >> >>>> Hello >> >>>> >> >>>> Yesterday, during the Oslo meeting we discussed [6] the possibility >> >>>> of creating a new Special Interest Group [1][2] to provide home and release >> >>>> means for operator related tools [3] [4] [5] >> >>>> >> >>>> >> >> all of those tools have python dependencies related to openstack such as >> >> python-openstackclient or python-pbr. Which is exactly the reason why we >> >> moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to >> >> Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent >> >> on openstack SIG. I believe that Cloud SIG is proper home for tools like >> >> [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains >> >> general tools like fluentd, sensu, collectd. >> >> >> >> >> >> Hope this helps, >> >> Martin >> >> >> > >> > Hey Martin, >> > >> > I'm not sure I understand the issue with these tools have dependencies on other >> > packages and the relationship to SIG ownership. Is your concern (or the history >> > of a concern you are pointing out) that the tools would have a more difficult >> > time if they required updates to dependencies if they are owned by a different >> > group? >> > >> > Thanks! >> > Sean >> > >> >> Hello, >> >> the mentioned sigs (opstools/cloud) are in CentOS scope and mention >> repository dependencies. That shouldn't bother us here now. >> >> >> There is already a SIG under the CentOS project, providing tools for >> operators[7], but also documentation and integrational bits. >> >> Also, there is some overlap with other groups and SIGs, such as >> Barometer[8]. >> >> Since there is already some duplication, I don't know where it makes >> sense to have a single group for this purpose? >> >> If that hasn't been clear yet, I'd be absolutely interested in >> joining/helping this effort. 
>> >> Hope this helps,
>> >> Martin
>> >
>> > Hey Martin,
>> >
>> > I'm not sure I understand the issue with these tools have dependencies on other
>> > packages and the relationship to SIG ownership. Is your concern (or the history
>> > of a concern you are pointing out) that the tools would have a more difficult
>> > time if they required updates to dependencies if they are owned by a different
>> > group?
>> >
>> > Thanks!
>> > Sean
>> >
>>
>> Hello,
>>
>> the mentioned sigs (opstools/cloud) are in CentOS scope and mention
>> repository dependencies. That shouldn't bother us here now.
>>
>> There is already a SIG under the CentOS project, providing tools for
>> operators[7], but also documentation and integrational bits.
>>
>> Also, there is some overlap with other groups and SIGs, such as
>> Barometer[8].
>>
>> Since there is already some duplication, I don't know where it makes
>> sense to have a single group for this purpose?
>>
>> If that hasn't been clear yet, I'd be absolutely interested in
>> joining/helping this effort.
>>
>> Matthias
>>
>> [7] https://wiki.centos.org/SpecialInterestGroup/OpsTools
>> [8] https://wiki.opnfv.org/collector/pages.action?key=fastpath
>>
>> --
>> Matthias Runge
>>
>> Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
>> Commercial register: Amtsgericht Muenchen, HRB 153243,
>> Managing Directors: Charles Cachera, Michael Cunningham,
>> Michael O'Neill, Eric Shander
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From sbauza at redhat.com Tue Oct 16 14:11:42 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 16 Oct 2018 16:11:42 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To: References: Message-ID: 

On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano wrote:
> Hi everybody,
> when I updated (only updated, without changing the openstack version) some
> kvm compute nodes on my ocata installation based on centos7, I discovered
> that the uuids in the resource_providers nova_api db table are different
> from the uuids in the compute_nodes nova db table.
> This causes several errors in the nova-compute service, because it is not
> able to receive instances anymore.
> Aligning the uuids with those from compute_nodes solves this problem.
> Could anyone tell me if it is a bug?
>

What do you mean by "updating some compute nodes"? In Nova, we consider uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where host is your nova-compute service name for this compute host, and hypervisor_hostname is in the case of libvirt the 'hostname' reported by the libvirt API [1]

If somehow one of the two values changes, then the Nova Resource Tracker will consider this new record as a separate compute node, thereby creating a new compute_nodes table record, and then a new UUID. Could you please check your compute_nodes table and see whether some entries were recently created?

-Sylvain

[1] https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html

> Regards
> Ignazio
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Oct 16 14:22:56 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 16 Oct 2018 16:22:56 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To: References: Message-ID: 

Hi Sylvain,
I mean launching "yum update" on the compute nodes. Now I am going to describe what happened. We had an environment made up of 3 kvm nodes. We added two new compute nodes. Since the addition was made 3 or 4 months after the first openstack installation, the 2 new compute nodes were updated to the most recent ocata packages. So we launched a yum update also on the 3 old compute nodes. After the above operations, the resource_providers table contained the wrong uuids for the 3 old nodes and they stopped working.
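To see the mismatch I compared the two tables with a small script like the one below (host, credentials and schema names are placeholders for my environment; with libvirt the resource provider name should match the hypervisor_hostname):

    # Rough sketch: report compute nodes whose uuid differs from the
    # matching resource provider. Schema names (nova / nova_api) are
    # the defaults of my ocata install; adjust connection details.
    import pymysql

    conn = pymysql.connect(host='controller', user='root',
                           password='secret')

    with conn.cursor() as cur:
        cur.execute('SELECT hypervisor_hostname, uuid '
                    'FROM nova.compute_nodes WHERE deleted = 0')
        compute_nodes = dict(cur.fetchall())

        cur.execute('SELECT name, uuid FROM nova_api.resource_providers')
        providers = dict(cur.fetchall())

    for host, cn_uuid in sorted(compute_nodes.items()):
        rp_uuid = providers.get(host)
        if rp_uuid != cn_uuid:
            print('%s: compute_nodes uuid=%s resource_providers uuid=%s'
                  % (host, cn_uuid, rp_uuid))

It only prints the hosts whose uuids disagree between the two tables.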
Updating the resource_providers uuids with the values from the compute_nodes table made the 3 old nodes work fine again.
Regards
Ignazio

On Tue, 16 Oct 2018 at 16:11, Sylvain Bauza wrote:
>
> On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano
> wrote:
>
>> Hi everybody,
>> when I updated (only updated, without changing the openstack version) some
>> kvm compute nodes on my ocata installation based on centos7, I discovered
>> that the uuids in the resource_providers nova_api db table are different
>> from the uuids in the compute_nodes nova db table.
>> This causes several errors in the nova-compute service, because it is not
>> able to receive instances anymore.
>> Aligning the uuids with those from compute_nodes solves this problem.
>> Could anyone tell me if it is a bug?
>>
>
> What do you mean by "updating some compute nodes"? In Nova, we consider
> uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where
> host is your nova-compute service name for this compute host, and
> hypervisor_hostname is in the case of libvirt the 'hostname' reported by
> the libvirt API [1]
>
> If somehow one of the two values changes, then the Nova Resource Tracker
> will consider this new record as a separate compute node, thereby creating a
> new compute_nodes table record, and then a new UUID.
> Could you please check your compute_nodes table and see whether some
> entries were recently created?
>
> -Sylvain
>
> [1]
> https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html
>
> Regards
>> Ignazio
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobias.rydberg at citynetwork.eu Tue Oct 16 15:05:10 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Tue, 16 Oct 2018 17:05:10 +0200
Subject: [Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG
Message-ID: <5c2a917d-2a77-b7da-46d5-9fb02c6018ba@citynetwork.eu>

Hi everyone,

Time for a new meeting for PCWG - tomorrow Wednesday 0700 UTC in #openstack-publiccloud!

Agenda found at https://etherpad.openstack.org/p/publiccloud-wg

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From lbragstad at gmail.com Tue Oct 16 15:11:19 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 16 Oct 2018 10:11:19 -0500
Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names
In-Reply-To: <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com>
References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
 <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com>
 <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com>
Message-ID: 

It happened. Documentation is hot off the press and ready for you to read [0]. As always, feel free to raise concerns, comments, or questions any time. I appreciate everyone's help in nailing this down.

[0] https://docs.openstack.org/oslo.policy/latest/user/usage.html#naming-policies

On Sat, Oct 13, 2018 at 6:07 AM Ghanshyam Mann wrote:
> ---- On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad < lbragstad at gmail.com> wrote ----
> > Sending a follow up here quick.
> > The reviewers actively participating in [0] are nearing a conclusion. > Ultimately, the convention is going to be: > > > :[:][:]:[:] > > Details about what that actually means can be found in the review [0]. > Each piece is denoted as being required or optional, along with examples. I > think this gives us a pretty good starting place, and the syntax is > flexible enough to support almost every policy naming convention we've > stumbled across. > > Now is the time if you have any final input or feedback. Thanks for > sticking with the discussion. > > Thanks Lance for working on this. Current version lgtm. I would like to > see some operators feedback also if this standard policy name format is > clear and easy understandable. > > -gmann > > > Lance > > [0] https://review.openstack.org/#/c/606214/ > > > > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad > wrote: > > > > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann > wrote: > > ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad < > lbragstad at gmail.com> wrote ---- > > > > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki > wrote: > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > > wrote: > > > > > > > > Ideally I would like to see it in the form of least specific to > most specific. But more importantly in a way that there is no additional > delimiters between the service type and the resource. Finally, I do not > like the change of plurality depending on action type. > > > > > > > > I propose we consider > > > > > > > > ::[:] > > > > > > > > Example for keystone (note, action names below are strictly > examples I am fine with whatever form those actions take): > > > > identity:projects:create > > > > identity:projects:delete > > > > identity:projects:list > > > > identity:projects:get > > > > > > > > It keeps things simple and consistent when you're looking > through overrides / defaults. > > > > --Morgan > > > +1 -- I think the ordering if `resource` comes before > > > `action|subaction` will be more clean. > > > > > > ++ > > > These are excellent points. I especially like being able to omit > the convention about plurality. Furthermore, I'd like to add that I think > we should make the resource singular (e.g., project instead or projects). > For example: > > > compute:server:list > > > > compute:server:updatecompute:server:createcompute:server:deletecompute:server:action:rebootcompute:server:action:confirm_resize > (or confirm-resize) > > > > Do we need "action" word there? I think action name itself should > convey the operation. IMO below notation without "äction" word looks clear > enough. what you say? > > > > compute:server:reboot > > compute:server:confirm_resize > > > > I agree. I simplified this in the current version up for review. > > -gmann > > > > > > > > Otherwise, someone might mistake compute:servers:get, as "list". > This is ultra-nick-picky, but something I thought of when seeing the usage > of "get_all" in policy names in favor of "list." > > > In summary, the new convention based on the most recent feedback > should be: > > > ::[:] > > > Rules:service-type is always defined in the service types authority > > > resources are always singular > > > Thanks to all for sticking through this tedious discussion. I > appreciate it. > > > /R > > > > > > Harry > > > > > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad < > lbragstad at gmail.com> wrote: > > > >> > > > >> Bumping this thread again and proposing two conventions based > on the discussion here. 
I propose we decide on one of the two following > conventions: > > > >> > > > >> :: > > > >> > > > >> or > > > >> > > > >> :_ > > > >> > > > >> Where is the corresponding service type of the > project [0], and is either create, get, list, update, or delete. I > think decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > > >> > > > >> I think the plurality of the resource should default to what > makes sense for the operation being carried out (e.g., list:foobars, > create:foobar). > > > >> > > > >> I don't mind the first one because it's clear about what the > delimiter is and it doesn't look weird when projects have something like: > > > >> > > > >> ::: > > > >> > > > >> If folks are ok with this, I can start working on some > documentation that explains the motivation for this. Afterward, we can > figure out how we want to track this work. > > > >> > > > >> What color do you want the shed to be? > > > >> > > > >> [0] https://service-types.openstack.org/service-types.json > > > >> [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > > >> > > > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad < > lbragstad at gmail.com> wrote: > > > >>> > > > >>> > > > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > > > >>>> > > > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < > john at johngarbutt.com> wrote ---- > > > >>>> > tl;dr+1 consistent names > > > >>>> > I would make the names mirror the API... because the > Operator setting them knows the API, not the codeIgnore the crazy names in > Nova, I certainly hate them > > > >>>> > > > >>>> Big +1 on consistent naming which will help operator as well > as developer to maintain those. > > > >>>> > > > >>>> > > > > >>>> > Lance Bragstad wrote: > > > >>>> > > I'm curious if anyone has context on the "os-" part of > the format? > > > >>>> > > > > >>>> > My memory of the Nova policy mess...* Nova's policy rules > traditionally followed the patterns of the code > > > >>>> > ** Yes, horrible, but it happened.* The code used to have > the OpenStack API and the EC2 API, hence the "os"* API used to expand with > extensions, so the policy name is often based on extensions** note most of > the extension code has now gone, including lots of related policies* Policy > in code was focused on getting us to a place where we could rename policy** > Whoop whoop by the way, it feels like we are really close to something > sensible now! > > > >>>> > Lance Bragstad wrote: > > > >>>> > Thoughts on using create, list, update, and delete as > opposed to post, get, put, patch, and delete in the naming convention? > > > >>>> > I could go either way as I think about "list servers" in > the API.But my preference is for the URL stub and POST, GET, etc. > > > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < > lbragstad at gmail.com> wrote:If we consider dropping "os", should we > entertain dropping "api", too? Do we have a good reason to keep "api"?I > wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > > > >>>> > +1The API is known as "compute" in api-ref, so the policy > should be for "compute", etc. > > > >>>> > > > >>>> Agree on mapping the policy name with api-ref as much as > possible. 
Other than policy name having 'os-', we have 'os-' in resource > name also in nova API url like /os-agents, /os-aggregates etc (almost every > resource except servers , flavors). As we cannot get rid of those from API > url, we need to keep the same in policy naming too? or we can have policy > name like compute:agents:create/post but that mismatch from api-ref where > agents resource url is os-agents. > > > >>> > > > >>> > > > >>> Good question. I think this depends on how the service does > policy enforcement. > > > >>> > > > >>> I know we did something like this in keystone, which required > policy names and method names to be the same: > > > >>> > > > >>> "identity:list_users": "..." > > > >>> > > > >>> Because the initial implementation of policy enforcement used > a decorator like this: > > > >>> > > > >>> from keystone import controller > > > >>> > > > >>> @controller.protected > > > >>> def list_users(self): > > > >>> ... > > > >>> > > > >>> Having the policy name the same as the method name made it > easier for the decorator implementation to resolve the policy needed to > protect the API because it just looked at the name of the wrapped method. > The advantage was that it was easy to implement new APIs because you only > needed to add a policy, implement the method, and make sure you decorate > the implementation. > > > >>> > > > >>> While this worked, we are moving away from it entirely. The > decorator implementation was ridiculously complicated. Only a handful of > keystone developers understood it. With the addition of system-scope, it > would have only become more convoluted. It also enables a much more > copy-paste pattern (e.g., so long as I wrap my method with this decorator > implementation, things should work right?). Instead, we're calling > enforcement within the controller implementation to ensure things are > easier to understand. It requires developers to be cognizant of how > different token types affect the resources within an API. That said, > coupling the policy name to the method name is no longer a requirement for > keystone. > > > >>> > > > >>> Hopefully, that helps explain why we needed them to match. > > > >>> > > > >>>> > > > >>>> > > > >>>> Also we have action API (i know from nova not sure from other > services) like POST /servers/{server_id}/action {addSecurityGroup} and > their current policy name is all inconsistent. few have policy name > including their resource name like > "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in > policy name like "os_compute_api:os-admin-actions:reset_state" and few has > direct action name like "os_compute_api:os-console-output" > > > >>> > > > >>> > > > >>> Since the actions API relies on the request body and uses a > single HTTP method, does it make sense to have the HTTP method in the > policy name? It feels redundant, and we might be able to establish a > convention that's more meaningful for things like action APIs. It looks > like cinder has a similar pattern [0]. > > > >>> > > > >>> [0] > https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > > >>> > > > >>>> > > > >>>> > > > >>>> May be we can make them consistent with > :: or any better opinion. > > > >>>> > > > >>>> > From: Lance Bragstad > The topic of > having consistent policy names has popped up a few times this week. > > > >>>> > > > > >>>> > I would love to have this nailed down before we go through > all the policy rules again. 
In my head I hope in Nova we can go through > each policy rule and do the following: > > > >>>> > * move to new consistent policy name, deprecate existing > name* hardcode scope check to project, system or user** (user, yes... > keypairs, yuck, but its how they work)** deprecate in rule scope checks, > which are largely bogus in Nova anyway* make read/write/admin distinction** > therefore adding the "noop" role, amount other things > > > >>>> > > > >>>> + policy granularity. > > > >>>> > > > >>>> It is good idea to make the policy improvement all together > and for all rules as you mentioned. But my worries is how much load it will > be on operator side to migrate all policy rules at same time? What will be > the deprecation period etc which i think we can discuss on proposed spec - > https://review.openstack.org/#/c/547850 > > > >>> > > > >>> > > > >>> Yeah, that's another valid concern. I know at least one > operator has weighed in already. I'm curious if operators have specific > input here. > > > >>> > > > >>> It ultimately depends on if they override existing policies or > not. If a deployment doesn't have any overrides, it should be a relatively > simple change for operators to consume. > > > >>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> -gmann > > > >>>> > > > >>>> > Thanks,John > __________________________________________________________________________ > > > >>>> > OpenStack Development Mailing List (not for usage > questions) > > > >>>> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >>>> > > > > >>>> > > > >>>> > > > >>>> > > > >>>> > __________________________________________________________________________ > > > >>>> OpenStack Development Mailing List (not for usage questions) > > > >>>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >> > > > >> > __________________________________________________________________________ > > > >> OpenStack Development Mailing List (not for usage questions) > > > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Oct 16 15:11:30 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 16 Oct 2018 11:11:30 -0400 Subject: [Openstack-operators] Ops Meetups team meeting 2018-10-16 Message-ID: The OpenStack Ops Meetups team met today on #openstack-operators, meeting minutes linked below. As discussed previously the ops meetups team intends to arrange two ops meetups in 2019, the first aimed for February or March in Europe, the second in August or September in North America. A Call for Proposals (CFP) will be issued shortly. For those of you attending the OpenStack Summit in Berlin next month, please note we'll arrange an informal social events for openstack operators (and anyone else who wants to come) on the Tuesday night. Several of the meetups team are also moderating sessions at the forum. See you there! Chris Minutes : http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.html Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.txt Log : http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.log.html -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Tue Oct 16 15:19:17 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Tue, 16 Oct 2018 08:19:17 -0700 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata In-Reply-To: References: Message-ID: Is it possible that the hostnames of the nodes changed when you updated them? e.g. maybe they were using fully-qualified names before and changed to short-form, or vice versa ? ~iain On 10/16/2018 07:22 AM, Ignazio Cassano wrote: > Hi Sylvain, > I mean launching "yum update" on compute nodes. > Now I am going to describe what happened. > We had an environment made up of 3 kvm nodes. > We added two new compute nodes. > Since the addition has been made after 3 or 4 months after the first > openstack installation, the 2 new compute nodes are updated to most > recent ocata packages. > So we launched a yum update also on the 3 old compute nodes. > After the above operations, the resource_providers table contains wrong > uuid for the 3 old nodes and they stooped to work. > Updating resource_providers uuid getting them from compute_nodes table, > the old 3 nodes return to work fine. > Regards > Ignazio > > Il giorno mar 16 ott 2018 alle ore 16:11 Sylvain Bauza > > ha scritto: > > > > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano > > wrote: > > Hi everybody, > when on my ocata installation based on centos7 I update (only > update not  changing openstack version) some kvm compute nodes, > I diescovered uuid in resource_providers nova_api db table are > different from uuid in compute_nodes nova db table. 
> This causes several errors in nova-compute service, because it > not able to receive instances anymore. > Aligning uuid from compute_nodes solves this problem. > Could anyone tel me if it is a bug ? > > > What do you mean by "updating some compute nodes" ? In Nova, we > consider uniqueness of compute nodes by a tuple (host, > hypervisor_hostname) where host is your nova-compute service name > for this compute host, and hypervisor_hostname is in the case of > libvirt the 'hostname' reported by the libvirt API [1] > > If somehow one of the two values change, then the Nova Resource > Tracker will consider this new record as a separate compute node, > hereby creating a new compute_nodes table record, and then a new UUID. > Could you please check your compute_nodes table and see whether some > entries were recently created ? > > -Sylvain > > [1] > https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html > > > Regards > Ignazio > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=_TK1Um7U6rr6DWfsEbv4Rlnc21v6RU0YDRepaIogZrI&s=COsaMeTCgWBDl9EQVZB_AGikvKqCIaWcA5RY7IcLYgw&e= > From sean.mcginnis at gmx.com Tue Oct 16 15:21:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 16 Oct 2018 10:21:11 -0500 Subject: [Openstack-operators] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: <20181016152111.GA8297@sm-workstation> On Mon, Oct 15, 2018 at 03:01:07PM -0500, Jimmy McArthur wrote: > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let me > know. > I have updated the Forum wiki page in preparation for the topic etherpads: https://wiki.openstack.org/wiki/Forum/Berlin2018 Please add your working session etherpad links once they are available so everyone has one spot to go to to find all relevant links. Thanks! Sean From mrhillsman at gmail.com Tue Oct 16 15:44:45 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 16 Oct 2018 10:44:45 -0500 Subject: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG In-Reply-To: References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> <20181012122059.GB3532@sm-xps> <44fd333b-1133-7a31-93f6-ee9035383210@redhat.com> Message-ID: Additional comments in-line. I am open to restructuring things around the tools and repos that they are managed in. As previously mentioned please include me in the list of folks who want to be a part of the team. 
On Tue, Oct 16, 2018 at 8:45 AM Erik McCormick wrote: > On Tue, Oct 16, 2018 at 5:19 AM Miguel Angel Ajo Pelayo > wrote: > > > > Hi, > > > > Matthias and I talked this morning about this topic, and we came to > realize > > that there's room for/would be beneficial to have a common place for: > > > > a) Documentation about second day operator tools which can be > > useful with OpenStack, links to repositories or availability for > every distribution. > > > Sounds like a Natural extension to the Ops Guide [1] which we've been > working to return to relevance. I suppose this could also be a wiki > like [2], but we should at least reference it in the guide. In any > event, massive cleanup of old, outdated content really needs to be > undertaken. That should be the other part of the mission I think. > https://wiki.openstack.org/wiki/Operation_Docs_SIG > > > b) Deployment documentation/config snippets/deployment scripts for those > tools > > in integration with OpenStack. > > > > c) Operator tools and bits which are developed or maintained on > OpenStack repos, > > specially the OpenStack related bits of those tools (plugins, etc), > > > We should probably try and revive [3] and make use of that more > effectively to address b and c. We've been trying to encourage > contribution to it for years, but it needs more contributors and some > TLC > Ops Tools SIG could facilitate this. Being involved particularly here I think it would be great to have more folks involved and with the addition of OpenLab we have a space to do a bit more around e2e and integration testing of the tools. > > > d) Home the organisation of ops-related rooms during OpenStack events, > general > > ones related to OpenStack, and also the distro-specific ones for > the distros interested > > in participation. > > > I'm not exactly sure what you mean by this item. We currently have a > team responsible for meetups and pushing Ops-related content into the > Forum at Summits. Do you propose merging the Ops Meetup Team into this > SIG? > Yes, there is already the Ops Meetup Team which facilitates this and of course anyone is encouraged to join and get involved: https://wiki.openstack.org/wiki/Ops_Meetups_Team > > > > Does this scope for the SIG make sense to everyone willing to > participate? > > > > > > Best regards, > > Miguel Ángel. > > > [1] https://docs.openstack.org/operations-guide/ > [2] https://wiki.openstack.org/wiki/Operations > [3] https://wiki.openstack.org/wiki/Osops#Code > > -Erik > > > > > On Mon, Oct 15, 2018 at 11:12 AM Matthias Runge > wrote: > >> > >> On 12/10/2018 14:21, Sean McGinnis wrote: > >> > On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: > >> >> Greetings guys, > >> >> > >> >> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < > >> >> majopela at redhat.com> wrote: > >> >> > >> >>> Adding the mailing lists back to your reply, thank you :) > >> >>> > >> >>> I guess that +melvin.hillsman at huawei.com < > melvin.hillsman at huawei.com> can > >> >>> help us a little bit organizing the SIG, > >> >>> but I guess the first thing would be collecting a list of tools > which > >> >>> could be published > >> >>> under the umbrella of the SIG, starting by the ones already in > Osops. > >> >>> > >> >>> Publishing documentation for those tools, and the catalog under > >> >>> docs.openstack.org > >> >>> is possibly the next step (or a parallel step). 
> >> >>> > >> >>> > >> >>> On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister > > >> >>> wrote: > >> >>> > >> >>>> Hi Miguel, > >> >>>> > >> >>>> I would love to join this. What do I need to do? > >> >>>> > >> >>>> Sent from my iPhone > >> >>>> > >> >>>> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo < > majopela at redhat.com> > >> >>>> wrote: > >> >>>> > >> >>>> Hello > >> >>>> > >> >>>> Yesterday, during the Oslo meeting we discussed [6] the > possibility > >> >>>> of creating a new Special Interest Group [1][2] to provide home > and release > >> >>>> means for operator related tools [3] [4] [5] > >> >>>> > >> >>>> > >> >> all of those tools have python dependencies related to openstack > such as > >> >> python-openstackclient or python-pbr. Which is exactly the reason > why we > >> >> moved osops-tools-monitoring-oschecks packaging away from OpsTools > SIG to > >> >> Cloud SIG. AFAIR we had some issues of having opstools SIG being > dependent > >> >> on openstack SIG. I believe that Cloud SIG is proper home for tools > like > >> >> [3][4][5] as they are related to OpenStack anyway. OpsTools SIG > contains > >> >> general tools like fluentd, sensu, collectd. > >> >> > >> >> > >> >> Hope this helps, > >> >> Martin > >> >> > >> > > >> > Hey Martin, > >> > > >> > I'm not sure I understand the issue with these tools have > dependencies on other > >> > packages and the relationship to SIG ownership. Is your concern (or > the history > >> > of a concern you are pointing out) that the tools would have a more > difficult > >> > time if they required updates to dependencies if they are owned by a > different > >> > group? > >> > > >> > Thanks! > >> > Sean > >> > > >> > >> Hello, > >> > >> the mentioned sigs (opstools/cloud) are in CentOS scope and mention > >> repository dependencies. That shouldn't bother us here now. > >> > >> > >> There is already a SIG under the CentOS project, providing tools for > >> operators[7], but also documentation and integrational bits. > >> > >> Also, there is some overlap with other groups and SIGs, such as > >> Barometer[8]. > >> > >> Since there is already some duplication, I don't know where it makes > >> sense to have a single group for this purpose? > >> > >> If that hasn't been clear yet, I'd be absolutely interested in > >> joining/helping this effort. 
> >>
> >> Matthias
> >>
> >> [7] https://wiki.centos.org/SpecialInterestGroup/OpsTools
> >> [8] https://wiki.opnfv.org/collector/pages.action?key=fastpath
> >>
> >> --
> >> Matthias Runge
> >>
> >> Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
> >> Commercial register: Amtsgericht Muenchen, HRB 153243,
> >> Managing Directors: Charles Cachera, Michael Cunningham,
> >> Michael O'Neill, Eric Shander
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > --
> > Miguel Ángel Ajo
> > OSP / Networking DFG, OVN Squad Engineering
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Oct 16 15:54:52 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 16 Oct 2018 17:54:52 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References:
Message-ID:

hello Iain,
it is not possible. I checked the hostnames several times. No changes.
I tried the same procedure on 3 different ocata installations, because we have 3 distinct openstack environments. Same results.
Regards
Ignazio

Il Mar 16 Ott 2018 17:20 iain MacDonnell ha scritto:

> Is it possible that the hostnames of the nodes changed when you updated
> them? e.g. maybe they were using fully-qualified names before and
> changed to short form, or vice versa?
>
> ~iain
>
> On 10/16/2018 07:22 AM, Ignazio Cassano wrote:
> > Hi Sylvain,
> > I mean launching "yum update" on compute nodes.
> > Now I am going to describe what happened.
> > We had an environment made up of 3 kvm nodes.
> > We added two new compute nodes.
> > Since the addition was made 3 or 4 months after the first openstack
> > installation, the 2 new compute nodes were updated to the most recent
> > ocata packages.
> > So we launched a yum update also on the 3 old compute nodes.
> > After the above operations, the resource_providers table contained the
> > wrong uuid for the 3 old nodes and they stopped working.
> > Updating the resource_providers uuids, taking them from the
> > compute_nodes table, made the 3 old nodes work fine again.
> > Regards
> > Ignazio
> >
> > Il giorno mar 16 ott 2018 alle ore 16:11 Sylvain Bauza
> > ha scritto:
> >
> >     On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano
> >     wrote:
> >
> >         Hi everybody,
> >         when on my ocata installation based on centos7 I update (only
> >         update, not changing openstack version) some kvm compute nodes,
> >         I discovered the uuids in the resource_providers nova_api db
> >         table are different from the uuids in the compute_nodes nova db
> >         table.
> >         This causes several errors in the nova-compute service, because
> >         it is not able to receive instances anymore.
> >         Aligning the uuids from compute_nodes solves this problem.
> >         Could anyone tell me if it is a bug?
> >
> >     What do you mean by "updating some compute nodes"?
> >     In Nova, we consider uniqueness of compute nodes by a tuple (host,
> >     hypervisor_hostname), where host is your nova-compute service name
> >     for this compute host, and hypervisor_hostname is, in the case of
> >     libvirt, the 'hostname' reported by the libvirt API [1]
> >
> >     If somehow one of the two values changes, then the Nova Resource
> >     Tracker will consider this new record as a separate compute node,
> >     hereby creating a new compute_nodes table record, and then a new UUID.
> >     Could you please check your compute_nodes table and see whether some
> >     entries were recently created ?
> >
> >     -Sylvain
> >
> >     [1] https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html
> >
> >         Regards
> >         Ignazio
> >         _______________________________________________
> >         OpenStack-operators mailing list
> >         OpenStack-operators at lists.openstack.org
> >         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org Tue Oct 16 17:15:16 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Tue, 16 Oct 2018 12:15:16 -0500
Subject: [Openstack-operators] [openstack-community] [openstack-dev] Forum Schedule - Seeking Community Review
In-Reply-To: <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch>
References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch>
Message-ID: <5BC61CA4.2010002@openstack.org>

I think you might have caught me while I was moving sessions around. This shouldn't be an issue now.

Thanks for checking!!

> Tim Bell
> October 16, 2018 at 1:37 AM
> Jimmy,
>
> While it's not a clash within the forum, there are two sessions for
> Ironic scheduled at the same time on Tuesday at 14h20, each of which
> has Julia as a speaker.
>
> Tim
>
> -----Original Message-----
> From: Jimmy McArthur
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Monday, 15 October 2018 at 22:04
> To: "OpenStack Development Mailing List (not for usage questions)",
> "OpenStack-operators at lists.openstack.org", "community at lists.openstack.org"
> Subject: [openstack-dev] Forum Schedule - Seeking Community Review
>
> Hi -
>
> The Forum schedule is now up
> (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).
>
> If you see a glaring content conflict within the Forum itself, please
> let me know.
>
> You can also view the Full Schedule in the attached PDF if that makes
> life easier...
>
> NOTE: BoFs and WGs are still not all up on the schedule. No need to let
> us know :)
>
> Cheers,
> Jimmy
>
> _______________________________________________
> Community mailing list
> Community at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>
> Jimmy McArthur
> October 15, 2018 at 3:01 PM
> Hi -
>
> The Forum schedule is now up
> (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).
> If you see a glaring content conflict within the Forum itself, please
> let me know.
>
> You can also view the Full Schedule in the attached PDF if that makes
> life easier...
>
> NOTE: BoFs and WGs are still not all up on the schedule. No need to
> let us know :)
>
> Cheers,
> Jimmy
> _______________________________________________
> Staff mailing list
> Staff at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/staff
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emccormick at cirrusseven.com Tue Oct 16 17:20:37 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Tue, 16 Oct 2018 13:20:37 -0400
Subject: [Openstack-operators] Ops Meetups - Call for Hosts
Message-ID:

Hello all,

The Ops Meetup team has embarked on a mission to revive the traditional Operators Meetups that have historically been held between Summits. With the upcoming merger of the PTG into the Summit week, and the merger of most Ops discussion sessions at Summits into the Forum, we felt that we needed to get back to our original format.

With that in mind, we are beginning the process of selecting venues for both 2019 Meetups. Some guidelines for what is needed to host can be found here:

https://wiki.openstack.org/wiki/Operations/Meetups#Venue_Selection

Each of the etherpads below contains a template to collect information about the potential host and venue. If you are interested in hosting a meetup, simply copy and paste the template into a blank etherpad, fill it out, and place a link above the template on the original etherpad.

Ops Meetup 2019 #1 - Late February / Early March - Somewhere in Europe
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2019

Ops Meetup 2019 #2 - Late July / Early August - Somewhere in North America
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-2nd-2019

Reply back to this thread with any questions or comments. If you are coming to the Berlin Summit, we will be having an Ops Meetup Team catch-up Forum session. We encourage all of you to join in making these events a success.
Cheers, Erik From juliaashleykreger at gmail.com Tue Oct 16 21:44:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 16 Oct 2018 15:44:26 -0600 Subject: [Openstack-operators] [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: <5BC61CA4.2010002@openstack.org> References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: Greetings Jimmy, Looks like it is still showing up on the schedule that way. I just reloaded the website page and it still has both sessions scheduled for 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone can help me with that for next year? :) -Julia On Tue, Oct 16, 2018 at 11:15 AM Jimmy McArthur wrote: > I think you might have caught me while I was moving sessions around. This > shouldn't be an issue now. > > Thanks for checking!! > > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for Ironic > scheduled at the same time on Tuesday at 14h20, each of which has Julia as > a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > > > , "community at lists.openstack.org" > > > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up ( > https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let > me know. > > You can also view the Full Schedule in the attached PDF if that makes life > easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Tue Oct 16 21:59:56 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Oct 2018 16:59:56 -0500 Subject: [Openstack-operators] [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: <5BC65F5C.4050401@openstack.org> Doh! You seriously need it! Working on a fix :) > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for > 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone > can help me with that for next year? :) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. > This shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for > Ironic scheduled at the same time on Tuesday at 14h20, each of which > has Julia as a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > , > "community at lists.openstack.org" > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. 
No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Oct 16 22:05:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Oct 2018 17:05:07 -0500 Subject: [Openstack-operators] [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: <5BC66093.5070301@openstack.org> OK - I think I got this fixed. I had to move a couple of things around. Julia, please let me know if this all works for you: https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=Kreger PS - You're going to have a long week :| > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for > 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone > can help me with that for next year? :) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. > This shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for > Ironic scheduled at the same time on Tuesday at 14h20, each of which > has Julia as a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > , > "community at lists.openstack.org" > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. 
No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Oct 16 22:56:04 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 16 Oct 2018 18:56:04 -0400 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata In-Reply-To: References: Message-ID: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com> On 10/16/2018 10:11 AM, Sylvain Bauza wrote: > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano > > wrote: > > Hi everybody, > when on my ocata installation based on centos7 I update (only update > not  changing openstack version) some kvm compute nodes, I > diescovered uuid in resource_providers nova_api db table are > different from uuid in compute_nodes nova db table. > This causes several errors in nova-compute service, because it not > able to receive instances anymore. > Aligning uuid from compute_nodes solves this problem. > Could anyone tel me if it is a bug ? > > > What do you mean by "updating some compute nodes" ? In Nova, we consider > uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where > host is your nova-compute service name for this compute host, and > hypervisor_hostname is in the case of libvirt the 'hostname' reported by > the libvirt API [1] > > If somehow one of the two values change, then the Nova Resource Tracker > will consider this new record as a separate compute node, hereby > creating a new compute_nodes table record, and then a new UUID. > Could you please check your compute_nodes table and see whether some > entries were recently created ? The compute_nodes table has no unique constraint on the hypervisor_hostname field unfortunately, even though it should. It's not like you can have two compute nodes with the same hostname. But, alas, this is one of those vestigial tails in nova due to poor initial table design and coupling between the concept of a nova-compute service worker and the hypervisor resource node itself. Ignazio, I was tempted to say you may have run into this: https://bugs.launchpad.net/nova/+bug/1714248 But then I see you're not using Ironic... I'm not entirely sure how you ended up with duplicate hypervisor_hostname records for the same compute node, but some of those duplicate records must have had the deleted field set to a non-zero value, given the constraint we currently have on (host, hypervisor_hostname, deleted). This means that your deployment script or some external scripts must have been deleting compute node records somehow, though I'm not entirely sure how... 
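One way to check for that kind of leftover is a quick look at the database. A minimal sketch, assuming MariaDB/MySQL and the default 'nova' database name: it lists hostnames that have more than one compute_nodes row, counting soft-deleted rows separately.

-- Sketch (assumes the nova DB is named 'nova'): hostnames that appear
-- more than once in compute_nodes, including soft-deleted rows.
SELECT hypervisor_hostname,
       COUNT(*) AS total_rows,
       SUM(deleted != 0) AS soft_deleted_rows
  FROM nova.compute_nodes
 GROUP BY hypervisor_hostname
HAVING COUNT(*) > 1;

A hostname that shows up with soft_deleted_rows > 0 would fit the deleted-and-recreated theory above.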
Best, -jay From juliaashleykreger at gmail.com Wed Oct 17 01:09:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 16 Oct 2018 19:09:26 -0600 Subject: [Openstack-operators] [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: <5BC66093.5070301@openstack.org> References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> <5BC66093.5070301@openstack.org> Message-ID: Looks Great, Thanks! -Julia PS - Indeed :( On Tue, Oct 16, 2018 at 4:05 PM Jimmy McArthur wrote: > OK - I think I got this fixed. I had to move a couple of things around. > Julia, please let me know if this all works for you: > > > https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=Kreger > > PS - You're going to have a long week :| > > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for 4:20 > PM local. Sadly, I don't have cloning technology. Perhaps someone can help > me with that for next year? :) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. This > shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for Ironic > scheduled at the same time on Tuesday at 14h20, each of which has Julia as > a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > > > , "community at lists.openstack.org" > > > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up ( > https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let > me know. 
>
> You can also view the Full Schedule in the attached PDF if that makes life
> easier...
>
> NOTE: BoFs and WGs are still not all up on the schedule. No need to let
> us know :)
>
> Cheers,
> Jimmy
> _______________________________________________
> Staff mailing list
> Staff at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/staff
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Wed Oct 17 05:41:45 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Wed, 17 Oct 2018 07:41:45 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID:

Hello Jay, when I add a new compute node I run "nova-manage cell_v2 discover_hosts".
Is it possible that this command updates the old host uuids in the resource_providers table?
Regards
Ignazio

Il Mer 17 Ott 2018 00:56 Jay Pipes ha scritto:

> On 10/16/2018 10:11 AM, Sylvain Bauza wrote:
> > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano wrote:
> >
> >     Hi everybody,
> >     when on my ocata installation based on centos7 I update (only update,
> >     not changing openstack version) some kvm compute nodes, I
> >     discovered the uuids in the resource_providers nova_api db table are
> >     different from the uuids in the compute_nodes nova db table.
> >     This causes several errors in the nova-compute service, because it is
> >     not able to receive instances anymore.
> >     Aligning the uuids from compute_nodes solves this problem.
> >     Could anyone tell me if it is a bug?
> >
> > What do you mean by "updating some compute nodes"? In Nova, we consider
> > uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where
> > host is your nova-compute service name for this compute host, and
> > hypervisor_hostname is in the case of libvirt the 'hostname' reported by
> > the libvirt API [1]
> >
> > If somehow one of the two values changes, then the Nova Resource Tracker
> > will consider this new record as a separate compute node, hereby
> > creating a new compute_nodes table record, and then a new UUID.
> > Could you please check your compute_nodes table and see whether some
> > entries were recently created ?
>
> The compute_nodes table has no unique constraint on the
> hypervisor_hostname field unfortunately, even though it should. It's not
> like you can have two compute nodes with the same hostname. But, alas,
> this is one of those vestigial tails in nova due to poor initial table
> design and coupling between the concept of a nova-compute service worker
> and the hypervisor resource node itself.
>
> Ignazio, I was tempted to say you may have run into this:
>
> https://bugs.launchpad.net/nova/+bug/1714248
>
> But then I see you're not using Ironic... I'm not entirely sure how you
> ended up with duplicate hypervisor_hostname records for the same compute
> node, but some of those duplicate records must have had the deleted
> field set to a non-zero value, given the constraint we currently have on
> (host, hypervisor_hostname, deleted).
> > This means that your deployment script or some external scripts must > have been deleting compute node records somehow, though I'm not entirely > sure how... > > Best, > -jay > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Wed Oct 17 14:01:15 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 17 Oct 2018 16:01:15 +0200 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata In-Reply-To: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com> References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com> Message-ID: On Wed, Oct 17, 2018 at 12:56 AM Jay Pipes wrote: > On 10/16/2018 10:11 AM, Sylvain Bauza wrote: > > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano > > > wrote: > > > > Hi everybody, > > when on my ocata installation based on centos7 I update (only update > > not changing openstack version) some kvm compute nodes, I > > diescovered uuid in resource_providers nova_api db table are > > different from uuid in compute_nodes nova db table. > > This causes several errors in nova-compute service, because it not > > able to receive instances anymore. > > Aligning uuid from compute_nodes solves this problem. > > Could anyone tel me if it is a bug ? > > > > > > What do you mean by "updating some compute nodes" ? In Nova, we consider > > uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where > > host is your nova-compute service name for this compute host, and > > hypervisor_hostname is in the case of libvirt the 'hostname' reported by > > the libvirt API [1] > > > > If somehow one of the two values change, then the Nova Resource Tracker > > will consider this new record as a separate compute node, hereby > > creating a new compute_nodes table record, and then a new UUID. > > Could you please check your compute_nodes table and see whether some > > entries were recently created ? > > The compute_nodes table has no unique constraint on the > hypervisor_hostname field unfortunately, even though it should. It's not > like you can have two compute nodes with the same hostname. But, alas, > this is one of those vestigial tails in nova due to poor initial table > design and coupling between the concept of a nova-compute service worker > and the hypervisor resource node itself. > > Sorry if I was unclear, but I meant we have a UK for (host, hypervisor_hostname, deleted) (I didn't explain about deleted, but meh). https://github.com/openstack/nova/blob/01c33c5/nova/db/sqlalchemy/models.py#L116-L118 But yeah, we don't have any UK for just (hypervisor_hostname, deleted), sure. Ignazio, I was tempted to say you may have run into this: > > https://bugs.launchpad.net/nova/+bug/1714248 > > But then I see you're not using Ironic... I'm not entirely sure how you > ended up with duplicate hypervisor_hostname records for the same compute > node, but some of those duplicate records must have had the deleted > field set to a non-zero value, given the constraint we currently have on > (host, hypervisor_hostname, deleted). > > This means that your deployment script or some external scripts must > have been deleting compute node records somehow, though I'm not entirely > sure how... > > Yeah that's why I asked for the compute_nodes records. Ignazio, could you please verify this ? 
Do you have multiple records for the same (host, hypervisor_hostname) tuple?

'select * from compute_nodes where host=XXX and hypervisor_hostname=YYY'

-Sylvain

> Best,
> -jay
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Wed Oct 17 14:13:06 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Wed, 17 Oct 2018 16:13:06 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID:

Hello Sylvain, here is the output of some selects:

MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
+--------------+---------------------+
| host         | hypervisor_hostname |
+--------------+---------------------+
| podto1-kvm01 | podto1-kvm01        |
| podto1-kvm02 | podto1-kvm02        |
| podto1-kvm03 | podto1-kvm03        |
| podto1-kvm04 | podto1-kvm04        |
| podto1-kvm05 | podto1-kvm05        |
+--------------+---------------------+

MariaDB [nova]> select host from compute_nodes where host='podto1-kvm01' and hypervisor_hostname='podto1-kvm01';
+--------------+
| host         |
+--------------+
| podto1-kvm01 |
+--------------+

Il giorno mer 17 ott 2018 alle ore 16:02 Sylvain Bauza ha scritto:

> On Wed, Oct 17, 2018 at 12:56 AM Jay Pipes wrote:
>
>> On 10/16/2018 10:11 AM, Sylvain Bauza wrote:
>> > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano wrote:
>> >
>> >     Hi everybody,
>> >     when on my ocata installation based on centos7 I update (only update,
>> >     not changing openstack version) some kvm compute nodes, I
>> >     discovered the uuids in the resource_providers nova_api db table are
>> >     different from the uuids in the compute_nodes nova db table.
>> >     This causes several errors in the nova-compute service, because it is
>> >     not able to receive instances anymore.
>> >     Aligning the uuids from compute_nodes solves this problem.
>> >     Could anyone tell me if it is a bug?
>> >
>> > What do you mean by "updating some compute nodes"? In Nova, we consider
>> > uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where
>> > host is your nova-compute service name for this compute host, and
>> > hypervisor_hostname is in the case of libvirt the 'hostname' reported by
>> > the libvirt API [1]
>> >
>> > If somehow one of the two values changes, then the Nova Resource Tracker
>> > will consider this new record as a separate compute node, hereby
>> > creating a new compute_nodes table record, and then a new UUID.
>> > Could you please check your compute_nodes table and see whether some
>> > entries were recently created ?
>>
>> The compute_nodes table has no unique constraint on the
>> hypervisor_hostname field unfortunately, even though it should. It's not
>> like you can have two compute nodes with the same hostname. But, alas,
>> this is one of those vestigial tails in nova due to poor initial table
>> design and coupling between the concept of a nova-compute service worker
>> and the hypervisor resource node itself.
>
> Sorry if I was unclear, but I meant we have a UK for (host,
> hypervisor_hostname, deleted) (I didn't explain about deleted, but meh).
> https://github.com/openstack/nova/blob/01c33c5/nova/db/sqlalchemy/models.py#L116-L118
>
> But yeah, we don't have any UK for just (hypervisor_hostname, deleted),
> sure.
>
>> Ignazio, I was tempted to say you may have run into this:
>>
>> https://bugs.launchpad.net/nova/+bug/1714248
>>
>> But then I see you're not using Ironic... I'm not entirely sure how you
>> ended up with duplicate hypervisor_hostname records for the same compute
>> node, but some of those duplicate records must have had the deleted
>> field set to a non-zero value, given the constraint we currently have on
>> (host, hypervisor_hostname, deleted).
>>
>> This means that your deployment script or some external scripts must
>> have been deleting compute node records somehow, though I'm not entirely
>> sure how...
>
> Yeah that's why I asked for the compute_nodes records. Ignazio, could you
> please verify this?
> Do you have multiple records for the same (host, hypervisor_hostname)
> tuple?
>
> 'select * from compute_nodes where host=XXX and hypervisor_hostname=YYY'
>
> -Sylvain
>
>> Best,
>> -jay
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaypipes at gmail.com Wed Oct 17 14:19:18 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 17 Oct 2018 10:19:18 -0400
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID: <0c3f7540-6c1d-abf5-64f9-82f2671a00f8@gmail.com>

On 10/17/2018 01:41 AM, Ignazio Cassano wrote:
> Hello Jay, when I add a new compute node I run "nova-manage cell_v2
> discover_hosts".
> Is it possible that this command updates the old host uuids in the
> resource_providers table?

No, not unless you already had a nova-compute installed on a host with the exact same hostname... which, from looking at the output of your SELECT from the compute_nodes table, doesn't seem to be the case.

In short, I think both Sylvain and I are stumped as to how your placement resource_providers table ended up with these phantom records :(

-jay

From mriedemos at gmail.com Wed Oct 17 14:37:20 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 17 Oct 2018 09:37:20 -0500
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID:

On 10/17/2018 9:13 AM, Ignazio Cassano wrote:
> Hello Sylvain, here is the output of some selects:
> MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
> +--------------+---------------------+
> | host         | hypervisor_hostname |
> +--------------+---------------------+
> | podto1-kvm01 | podto1-kvm01        |
> | podto1-kvm02 | podto1-kvm02        |
> | podto1-kvm03 | podto1-kvm03        |
> | podto1-kvm04 | podto1-kvm04        |
> | podto1-kvm05 | podto1-kvm05        |
> +--------------+---------------------+
>
> MariaDB [nova]> select host from compute_nodes where host='podto1-kvm01'
> and hypervisor_hostname='podto1-kvm01';
> +--------------+
> | host         |
> +--------------+
> | podto1-kvm01 |
> +--------------+
It's possible that the actual services table record was deleted via the os-services REST API for some reason, which would delete the compute_nodes table record, and then a restart of the nova-compute process would recreate the services and compute_nodes table records, but with a new compute node uuid and thus a new resource provider. Maybe query your shadow_services and shadow_compute_nodes tables for "podto1-kvm01" and see if a record existed at one point, was deleted and then archived to the shadow tables. -- Thanks, Matt From ignaziocassano at gmail.com Wed Oct 17 14:37:16 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 17 Oct 2018 16:37:16 +0200 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata In-Reply-To: <0c3f7540-6c1d-abf5-64f9-82f2671a00f8@gmail.com> References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com> <0c3f7540-6c1d-abf5-64f9-82f2671a00f8@gmail.com> Message-ID: Hello, I am sure we are not using nova-compute with duplicate names. As I told previously we tried on 3 differents openstack installations and we faced the same issue. Procedure used We have an openstack with 3 compute nodes : podto1-kvm01, podto1-kvm02, podto1-kvm03 1) install a new compute node (podto1-kvm04) 2) On controller we discovered the new compute node: su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova 3) Evacuate podto1-kvm01 4) yum update on podto1-kvm01 and reboot it 5) Evacuate podto1-kvm02 6) yum update on podto1-kvm02 and reboot it 7) Evacuate podto1-kvm03 8) yum update podto1-kvm03 and reboot it Regards Il giorno mer 17 ott 2018 alle ore 16:19 Jay Pipes ha scritto: > On 10/17/2018 01:41 AM, Ignazio Cassano wrote: > > Hello Jay, when I add a New compute node I run nova-manage cell_v2 > > discover host . > > IS it possible this command update the old host uuid in resource table? > > No, not unless you already had a nova-compute installed on a host with > the exact same hostname... which, from looking at the output of your > SELECT from compute_nodes table, doesn't seem to be the case. > > In short, I think both Sylvain and I are stumped as to how your > placement resource_providers table ended up with these phantom records :( > > -jay > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ignaziocassano at gmail.com Wed Oct 17 14:44:47 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Wed, 17 Oct 2018 16:44:47 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID:

Hello, here are the selects you suggested:

MariaDB [nova]> select * from shadow_services;
Empty set (0,00 sec)

MariaDB [nova]> select * from shadow_compute_nodes;
Empty set (0,00 sec)

As far as the upgrade tooling is concerned, we are using only yum update on the old compute nodes, to have the same packages installed as on the new compute nodes.

Procedure used:
We have an openstack with 3 compute nodes: podto1-kvm01, podto1-kvm02, podto1-kvm03
1) install a new compute node (podto1-kvm04)
2) On the controller we discovered the new compute node: su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
3) Evacuate podto1-kvm01
4) yum update on podto1-kvm01 and reboot it
5) Evacuate podto1-kvm02
6) yum update on podto1-kvm02 and reboot it
7) Evacuate podto1-kvm03
8) yum update podto1-kvm03 and reboot it

Il giorno mer 17 ott 2018 alle ore 16:37 Matt Riedemann ha scritto:

> On 10/17/2018 9:13 AM, Ignazio Cassano wrote:
> > Hello Sylvain, here is the output of some selects:
> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
> > +--------------+---------------------+
> > | host         | hypervisor_hostname |
> > +--------------+---------------------+
> > | podto1-kvm01 | podto1-kvm01        |
> > | podto1-kvm02 | podto1-kvm02        |
> > | podto1-kvm03 | podto1-kvm03        |
> > | podto1-kvm04 | podto1-kvm04        |
> > | podto1-kvm05 | podto1-kvm05        |
> > +--------------+---------------------+
> >
> > MariaDB [nova]> select host from compute_nodes where host='podto1-kvm01'
> > and hypervisor_hostname='podto1-kvm01';
> > +--------------+
> > | host         |
> > +--------------+
> > | podto1-kvm01 |
> > +--------------+
>
> Does your upgrade tooling run a db archive/purge at all? It's possible
> that the actual services table record was deleted via the os-services
> REST API for some reason, which would delete the compute_nodes table
> record, and then a restart of the nova-compute process would recreate
> the services and compute_nodes table records, but with a new compute
> node uuid and thus a new resource provider.
>
> Maybe query your shadow_services and shadow_compute_nodes tables for
> "podto1-kvm01" and see if a record existed at one point, was deleted and
> then archived to the shadow tables.
>
> --
>
> Thanks,
>
> Matt
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Wed Oct 17 15:41:36 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 17 Oct 2018 10:41:36 -0500
Subject: [Openstack-operators] [openstack-dev] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps
In-Reply-To:
References:
Message-ID: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com>

On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote:
> As you may know, unfortunately, Horizon doesn't support all features
> provided by the APIs. That's why we created the feature gaps list [1].
>
> I had a lot of great conversations with project teams during the PTG,
> and we tried to figure out what should be done to prioritize these tasks.
> It's really helpful for Horizon to get feedback from other teams to
> understand what features should be implemented next.
>
> While I'm filling launchpad with new bugs and blueprints for [1], it
> would be good to review this list again and find some volunteers to
> decrease the feature gaps.
>
> [1] https://etherpad.openstack.org/p/horizon-feature-gap
>
> Thanks everybody for any of your contributions to Horizon.

+openstack-sigs
+openstack-operators

I've left some notes for nova. This looks very similar to the compute API OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to really work on without some user/operator feedback - maybe we can get the user work group involved in trying to help prioritize what people really want that is missing from horizon, at least for compute?

[1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc

-- 

Thanks,

Matt

From michael.d.moore at nasa.gov Wed Oct 17 19:29:13 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Wed, 17 Oct 2018 19:29:13 +0000
Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov>

All,

I'm seeing unexpected behavior in our Queens environment related to Glance image visibility. Specifically, users who, based on my understanding of the visibility and ownership fields, should NOT be able to see or view an image can in fact see it.

If I create a new image with openstack image create and specify --project and --private, a non-admin user in a different tenant can see and boot that image.

That seems to be the opposite of what should happen. Any ideas?

Mike Moore, M.S.S.E.

Systems Engineer, Goddard Private Cloud
GITISS Contract
Business Integra Inc.
NASA Goddard Space Flight Center

Michael.D.Moore at nasa.gov
www.BusinessIntegra.com

Hydrogen fusion brightens my day.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iain.macdonnell at oracle.com Thu Oct 18 05:01:09 2018
From: iain.macdonnell at oracle.com (iain MacDonnell)
Date: Wed, 17 Oct 2018 22:01:09 -0700
Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
In-Reply-To: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov>
References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov>
Message-ID:

On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote:
> I'm seeing unexpected behavior in our Queens environment related to
> Glance image visibility. Specifically, users who, based on my
> understanding of the visibility and ownership fields, should NOT be able
> to see or view an image can in fact see it.
>
> If I create a new image with openstack image create and specify --project
> and --private, a non-admin user in a different tenant can see and
> boot that image.
>
> That seems to be the opposite of what should happen. Any ideas?

Yep, something's not right there.

Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token)?

Did you verify that the image's owner is what you intended, and that the visibility really is "private"?
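One quick way to check what glance actually stored is to look at the image record itself. This is only a sketch, assuming direct access to a database named 'glance' and the standard Queens schema; the image id is a placeholder:

-- Confirm the owner project id and the stored visibility value.
SELECT id, name, owner, visibility, status
  FROM glance.images
 WHERE id = '<image-uuid>';

If visibility comes back as 'public' or 'shared' rather than 'private', the flags did not take effect at creation time; if it really is 'private' and the owner is the intended project, the admin-role/policy angle is the next thing to rule out.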
~iain

From tony at bakeyournoodle.com Thu Oct 18 06:35:39 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 18 Oct 2018 17:35:39 +1100
Subject: [Openstack-operators] [all] Naming the T release of OpenStack
Message-ID: <20181018063539.GC6589@thor.bakeyournoodle.com>

Hello all,
    As per [1] the nomination period for names for the T release has now closed (actually 3 days ago, sorry). The nominated names and any qualifying remarks can be seen at [2].

Proposed Names
 * Tarryall
 * Teakettle
 * Teller
 * Telluride
 * Thomas
 * Thornton
 * Tiger
 * Tincup
 * Timnath
 * Timber
 * Tiny Town
 * Torreys
 * Trail
 * Trinidad
 * Treasure
 * Troublesome
 * Trussville
 * Turret
 * Tyrone

Proposed Names that do not meet the criteria
 * Train

However I'd like to suggest we skip the CIVS poll and select 'Train' as the release name by TC resolution [3]. My thinking for this is:

 * It's fun and celebrates a humorous moment in our community
 * As a developer I've heard the T release called Train for quite some
   time, and it was used often at the PTG [4].
 * As the *next* PTG is also in Colorado we can still choose a
   geographic-based name for U [5]
 * If Train causes a problem for trademark reasons then we can always
   run the poll

I'll leave [3] marked -W for a week for discussion to happen before the TC can consider / vote on it.

Yours Tony.

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
[2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
[3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
[4] https://twitter.com/vkmc/status/1040321043959754752
[5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From sbauza at redhat.com Thu Oct 18 08:24:26 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 18 Oct 2018 10:24:26 +0200
Subject: [Openstack-operators] nova_api resource_providers table issues on ocata
In-Reply-To:
References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com>
Message-ID:

On Wed, Oct 17, 2018 at 4:46 PM Ignazio Cassano wrote:

> Hello, here are the selects you suggested:
>
> MariaDB [nova]> select * from shadow_services;
> Empty set (0,00 sec)
>
> MariaDB [nova]> select * from shadow_compute_nodes;
> Empty set (0,00 sec)
>
> As far as the upgrade tooling is concerned, we are using only yum update
> on the old compute nodes, to have the same packages installed as on the
> new compute nodes.

Well, to be honest, I was looking at some other bug for OSP https://bugzilla.redhat.com/show_bug.cgi?id=1636463 which is pretty identical, so you're not alone :-)

For some reason, yum update modifies something in the DB that I don't know yet. Which exact packages are you using? RDO ones?

I marked the downstream bug as NOTABUG since I wasn't able to reproduce it, and given I also provided a SQL query for fixing it, but maybe we should try to see which specific package has a problem...
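For reference, a sketch of that kind of realignment query. It assumes the default database names ('nova' and 'nova_api') on the same MariaDB server, and that the placement provider name matches compute_nodes.hypervisor_hostname, which is the libvirt case; back up both databases and stop the affected nova-compute services before running anything like it:

-- Diagnostic: providers whose uuid no longer matches the compute node uuid.
SELECT rp.name, rp.uuid AS provider_uuid, cn.uuid AS compute_node_uuid
  FROM nova_api.resource_providers rp
  JOIN nova.compute_nodes cn ON cn.hypervisor_hostname = rp.name
 WHERE cn.deleted = 0
   AND rp.uuid != cn.uuid;

-- Realignment (the fix Ignazio applied by hand): copy the compute node
-- uuid back onto the mismatched provider rows.
UPDATE nova_api.resource_providers rp
  JOIN nova.compute_nodes cn ON cn.hypervisor_hostname = rp.name
   SET rp.uuid = cn.uuid
 WHERE cn.deleted = 0
   AND rp.uuid != cn.uuid;

Inventories and allocations reference the provider's internal integer id rather than its uuid, so realigning the uuid this way should leave them intact.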
-Sylvain Procedure used > We have an openstack with 3 compute nodes : podto1-kvm01, podto1-kvm02, > podto1-kvm03 > 1) install a new compute node (podto1-kvm04) > 2) On controller we discovered the new compute node: su -s /bin/sh -c > "nova-manage cell_v2 discover_hosts --verbose" nova > 3) Evacuate podto1-kvm01 > 4) yum update on podto1-kvm01 and reboot it > 5) Evacuate podto1-kvm02 > 6) yum update on podto1-kvm02 and reboot it > 7) Evacuate podto1-kvm03 > 8) yum update podto1-kvm03 and reboot it > > > > Il giorno mer 17 ott 2018 alle ore 16:37 Matt Riedemann < > mriedemos at gmail.com> ha scritto: > >> On 10/17/2018 9:13 AM, Ignazio Cassano wrote: >> > Hello Sylvain, here the output of some selects: >> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes; >> > +--------------+---------------------+ >> > | host | hypervisor_hostname | >> > +--------------+---------------------+ >> > | podto1-kvm01 | podto1-kvm01 | >> > | podto1-kvm02 | podto1-kvm02 | >> > | podto1-kvm03 | podto1-kvm03 | >> > | podto1-kvm04 | podto1-kvm04 | >> > | podto1-kvm05 | podto1-kvm05 | >> > +--------------+---------------------+ >> > >> > MariaDB [nova]> select host from compute_nodes where >> host='podto1-kvm01' >> > and hypervisor_hostname='podto1-kvm01'; >> > +--------------+ >> > | host | >> > +--------------+ >> > | podto1-kvm01 | >> > +--------------+ >> >> Does your upgrade tooling run a db archive/purge at all? It's possible >> that the actual services table record was deleted via the os-services >> REST API for some reason, which would delete the compute_nodes table >> record, and then a restart of the nova-compute process would recreate >> the services and compute_nodes table records, but with a new compute >> node uuid and thus a new resource provider. >> >> Maybe query your shadow_services and shadow_compute_nodes tables for >> "podto1-kvm01" and see if a record existed at one point, was deleted and >> then archived to the shadow tables. >> >> -- >> >> Thanks, >> >> Matt >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Oct 18 12:21:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Oct 2018 07:21:57 -0500 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps In-Reply-To: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> References: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> Message-ID: <20181018122157.GA3125@sm-workstation> On Wed, Oct 17, 2018 at 10:41:36AM -0500, Matt Riedemann wrote: > On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote: > > > > As you may know, unfortunately, Horizon doesn't support all features > > provided by APIs. That's why we created feature gaps list [1]. > > > > I'd got a lot of great conversations with projects teams during the PTG > > and we tried to figure out what should be done prioritize these tasks. > > It's really helpful for Horizon to get feedback from other teams to > > understand what features should be implemented next. 
> > > > While I'm filling launchpad with new bugs and blueprints for [1], it > > would be good to review this list again and find some volunteers to > > decrease feature gaps. > > > > [1] https://etherpad.openstack.org/p/horizon-feature-gap > > > > Thanks everybody for any of your contributions to Horizon. > > +openstack-sigs > +openstack-operators > > I've left some notes for nova. This looks very similar to the compute API > OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to > really work on without some user/operator feedback - maybe we can get the > user work group involved in trying to help prioritize what people really > want that is missing from horizon, at least for compute? > > [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc > > -- > > Thanks, > > Matt I also have a cinderclient OSC gap analysis I've started working on. It might be useful to add a Horizon column to this list too. https://ethercalc.openstack.org/cinderclient-osc-gap Sean From anteaya at anteaya.info Thu Oct 18 15:31:36 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 18 Oct 2018 11:31:36 -0400 Subject: [Openstack-operators] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> On 2018-10-18 2:35 a.m., Tony Breeds wrote: > Hello all, > As per [1] the nomination period for names for the T release has > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at [2]. > > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My thinking for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > some time, and it was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If Train causes a problem for trademark reasons then we can always > run the poll > > I'll leave [3] marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z I stand in opposition to any action that further undermines democracy. I have avoided events in Denver lately for this reason. If the support for Train is as universal as is portrayed, the poll will show us that. I don't care what the name is. I do want to participate in the selection. The method of participating has heretofore been a poll. I have seen no convincing argument to abandon the use of a poll now. I stand for what democracy there remains. I would like to participate in a poll.
Thank you, Anita From openstack at medberry.net Thu Oct 18 15:39:13 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 18 Oct 2018 09:39:13 -0600 Subject: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: I'm fine with Train but I'm also fine with just adding it to the list and voting on it. It will win. Also, for those not familiar with the debian/ubuntu command "sl", now is the time to become so. apt install sl sl -Flea #ftw On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds wrote: > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at2]. > > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My think for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > sometime, and was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If train causes a problem for trademark reasons then we can always > run the poll > > I'll leave[3] for marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] > https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Thu Oct 18 15:41:16 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 18 Oct 2018 09:41:16 -0600 Subject: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: and any talks I give in Denver (Forum, Ops, Main) will include "sl". It's handy in a variety of ways. On Thu, Oct 18, 2018 at 9:39 AM David Medberry wrote: > I'm fine with Train but I'm also fine with just adding it to the list and > voting on it. It will win. > > Also, for those not familiar with the debian/ubuntu command "sl", now is > the time to become so. > > apt install sl > sl -Flea #ftw > > On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds > wrote: > >> Hello all, >> As per [1] the nomination period for names for the T release have >> now closed (actually 3 days ago sorry). The nominated names and any >> qualifying remarks can be seen at2]. 
>> >> Proposed Names >> * Tarryall >> * Teakettle >> * Teller >> * Telluride >> * Thomas >> * Thornton >> * Tiger >> * Tincup >> * Timnath >> * Timber >> * Tiny Town >> * Torreys >> * Trail >> * Trinidad >> * Treasure >> * Troublesome >> * Trussville >> * Turret >> * Tyrone >> >> Proposed Names that do not meet the criteria >> * Train >> >> However I'd like to suggest we skip the CIVS poll and select 'Train' as >> the release name by TC resolution[3]. My think for this is >> >> * It's fun and celebrates a humorous moment in our community >> * As a developer I've heard the T release called Train for quite >> sometime, and was used often at the PTG[4]. >> * As the *next* PTG is also in Colorado we can still choose a >> geographic based name for U[5] >> * If train causes a problem for trademark reasons then we can always >> run the poll >> >> I'll leave[3] for marked -W for a week for discussion to happen before the >> TC can consider / vote on it. >> >> Yours Tony. >> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >> [3] >> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >> [4] https://twitter.com/vkmc/status/1040321043959754752 >> [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Oct 18 15:43:23 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 18 Oct 2018 10:43:23 -0500 Subject: [Openstack-operators] [all] Naming the T release of OpenStack In-Reply-To: <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> Message-ID: I agree with Anita and wonder why Train did not meet the criteria? If there is no way for Train to be an option outside of killing the voting, than for the sake of integrity of processes which I have heard quite a few people hold close to we should drop Train from the list. It is an unfortunate thing in my view because I am actually a "non-developer" who agreed during the feedback session that Train would be a great name but Anita is right on this one imho. On Thu, Oct 18, 2018 at 10:32 AM Anita Kuno wrote: > On 2018-10-18 2:35 a.m., Tony Breeds wrote: > > Hello all, > > As per [1] the nomination period for names for the T release have > > now closed (actually 3 days ago sorry). The nominated names and any > > qualifying remarks can be seen at2]. > > > > Proposed Names > > * Tarryall > > * Teakettle > > * Teller > > * Telluride > > * Thomas > > * Thornton > > * Tiger > > * Tincup > > * Timnath > > * Timber > > * Tiny Town > > * Torreys > > * Trail > > * Trinidad > > * Treasure > > * Troublesome > > * Trussville > > * Turret > > * Tyrone > > > > Proposed Names that do not meet the criteria > > * Train > > > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > > the release name by TC resolution[3]. My think for this is > > > > * It's fun and celebrates a humorous moment in our community > > * As a developer I've heard the T release called Train for quite > > sometime, and was used often at the PTG[4]. 
> > * As the *next* PTG is also in Colorado we can still choose a > > geographic based name for U[5] > > * If train causes a problem for trademark reasons then we can always > > run the poll > > > > I'll leave[3] for marked -W for a week for discussion to happen before > the > > TC can consider / vote on it. > > > > Yours Tony. > > > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > > [3] > https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > > [4] https://twitter.com/vkmc/status/1040321043959754752 > > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z > > I stand in opposition to any action that further undermines democracy. > > I have avoided events in Denver lately for this reason. > > If the support for Train is as universal as is portrayed, the poll with > show us that. > > I don't care what the name is. I do want to participate in the > selection. The method of participating has heretofore been a poll. I > have seen no convincing argument to abandon the use of a poll now. > > I stand for what democracy there remains. I would like to participate in > a poll. > > Thank you, Anita > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Oct 18 15:46:12 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 18 Oct 2018 23:46:12 +0800 Subject: [Openstack-operators] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> Message-ID: Just do a vote as usual, train is a great candidate :) On Thu, Oct 18, 2018 at 11:44 PM Melvin Hillsman wrote: > I agree with Anita and wonder why Train did not meet the criteria? If > there is no way for Train to be an option outside of killing the voting, > than for the sake of integrity of processes which I have heard quite a few > people hold close to we should drop Train from the list. It is an > unfortunate thing in my view because I am actually a "non-developer" who > agreed during the feedback session that Train would be a great name but > Anita is right on this one imho. > > On Thu, Oct 18, 2018 at 10:32 AM Anita Kuno wrote: > >> On 2018-10-18 2:35 a.m., Tony Breeds wrote: >> > Hello all, >> > As per [1] the nomination period for names for the T release have >> > now closed (actually 3 days ago sorry). The nominated names and any >> > qualifying remarks can be seen at2]. >> > >> > Proposed Names >> > * Tarryall >> > * Teakettle >> > * Teller >> > * Telluride >> > * Thomas >> > * Thornton >> > * Tiger >> > * Tincup >> > * Timnath >> > * Timber >> > * Tiny Town >> > * Torreys >> > * Trail >> > * Trinidad >> > * Treasure >> > * Troublesome >> > * Trussville >> > * Turret >> > * Tyrone >> > >> > Proposed Names that do not meet the criteria >> > * Train >> > >> > However I'd like to suggest we skip the CIVS poll and select 'Train' as >> > the release name by TC resolution[3]. 
My think for this is >> > >> > * It's fun and celebrates a humorous moment in our community >> > * As a developer I've heard the T release called Train for quite >> > sometime, and was used often at the PTG[4]. >> > * As the *next* PTG is also in Colorado we can still choose a >> > geographic based name for U[5] >> > * If train causes a problem for trademark reasons then we can always >> > run the poll >> > >> > I'll leave[3] for marked -W for a week for discussion to happen before >> the >> > TC can consider / vote on it. >> > >> > Yours Tony. >> > >> > [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >> > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >> > [3] >> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >> > [4] https://twitter.com/vkmc/status/1040321043959754752 >> > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >> >> I stand in opposition to any action that further undermines democracy. >> >> I have avoided events in Denver lately for this reason. >> >> If the support for Train is as universal as is portrayed, the poll with >> show us that. >> >> I don't care what the name is. I do want to participate in the >> selection. The method of participating has heretofore been a poll. I >> have seen no convincing argument to abandon the use of a poll now. >> >> I stand for what democracy there remains. I would like to participate in >> a poll. >> >> Thank you, Anita >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Thu Oct 18 15:47:45 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 18 Oct 2018 08:47:45 -0700 Subject: [Openstack-operators] [all] Naming the T release of OpenStack In-Reply-To: <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <3ce7439b-292d-879d-96a0-1a5bc7f1c748@anteaya.info> Message-ID: <54680329-434b-3c9f-6005-1f4d75f61328@oracle.com> On 10/18/2018 08:31 AM, Anita Kuno wrote: > On 2018-10-18 2:35 a.m., Tony Breeds wrote: ... >> However I'd like to suggest we skip the CIVS poll and select 'Train' as >> the release name by TC resolution[3].  My think for this is >> >>   * It's fun and celebrates a humorous moment in our community >>   * As a developer I've heard the T release called Train for quite >>     sometime, and was used often at the PTG[4]. 
>>   * As the *next* PTG is also in Colorado we can still choose a >>     geographic based name for U[5] >>   * If train causes a problem for trademark reasons then we can always >>     run the poll >> >> I'll leave[3] for marked -W for a week for discussion to happen before >> the TC can consider / vote on it. > > I stand in opposition to any action that further undermines democracy. > > I have avoided events in Denver lately for this reason. > > If the support for Train is as universal as is portrayed, the poll with > show us that. > > I don't care what the name is. I do want to participate in the > selection. The method of participating has heretofore been a poll. I > have seen no convincing argument to abandon the use of a poll now. > > I stand for what democracy there remains. I would like to participate in > a poll. +1 ... and if you're not going to use the nominations, don't waste people's time asking for submissions. I'm not super-fond of Train, since "release train" means something (else), at least in some contexts... ~iain From gael.therond at gmail.com Thu Oct 18 15:51:09 2018 From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Thu, 18 Oct 2018 17:51:09 +0200 Subject: [Openstack-operators] [OCTAVIA][QUEENS][KOLLA] - network/subnet not found. Message-ID: Hi guys, I'm back to business with Octavia after a long time but I'm facing an issue that seems a little bit tricky. When trying to create a LB using either APIs (cURL/postman) calls or openstack-client the request finish with an error such as: `Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400)` If I put my client or the Octavia api in DEBUG mode, I found out neutron to correctly sending back to him a RESP BODY with the requested network/subnet in it. Here is the stacktrace that I get from both, Openstack client and the Octavia API logs: ``` POST call to https://api-emea-west-az1.cloud.inkdrop.sh:9876/v2.0/lbaas/loadbalancers used request id req-2f929192-4e60-491b-b65d-3a7bef43e978 Request returned failure status: 400 Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. 
(HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) Traceback (most recent call last): File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper response = func(*args, **kwargs) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create response = self.create(url, **params) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create ret = self._request(method, url, session=session, **params) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request return session.request(url, method, **kwargs) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request raise exceptions.from_response(resp, method, url) keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand result = cmd.run(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run column_names, data = self.take_action(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action json=body) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper request_id=e.request_id) octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) clean_up CreateLoadBalancer: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. 
(HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) Traceback (most recent call last): File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper response = func(*args, **kwargs) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create response = self.create(url, **params) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create ret = self._request(method, url, session=session, **params) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request return session.request(url, method, **kwargs) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request raise exceptions.from_response(resp, method, url) keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 135, in run ret_val = super(OpenStackShell, self).run(argv) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 281, in run result = self.run_subcommand(remainder) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 175, in run_subcommand ret_value = super(OpenStackShell, self).run_subcommand(argv) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand result = cmd.run(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run column_names, data = self.take_action(parsed_args) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action json=body) File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper request_id=e.request_id) octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) END return value: 1 ``` I'm using the following openstack clients and libraries: ``` keystoneauth1==3.11.0 kolla-ansible==6.1.0 openstacksdk==0.17.2 os-client-config==1.31.2 os-service-types==1.3.0 osc-lib==1.11.1 oslo.config==6.5.1 oslo.context==2.21.0 oslo.i18n==3.22.1 oslo.log==3.40.1 oslo.serialization==2.28.1 oslo.utils==3.37.0 python-cinderclient==4.1.0 python-dateutil==2.7.3 python-glanceclient==2.12.1 python-keystoneclient==3.17.0 python-neutronclient==6.10.0 python-novaclient==11.0.0 python-octaviaclient==1.6.0 python-openstackclient==3.16.1 ``` If, on the same virtualenv, I run: `openstack --os-cloud ic-emea-west-az0 --os-region ic-emea-west-az1 network list` I correctly get my requested network/subnet id. I'm using Kolla to deploy octavia and get the exact same issue with the whole kolla 6.0.0 to 6.1.0 series. If anyone has an idea, I'm all in ^^
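One more check worth running here is whether the service credentials Octavia itself uses can see that network - a hedged sketch, with the [service_auth] values from octavia.conf as placeholders:

```
# Repeat the lookup as the Octavia service user rather than your own user;
# a "not found" here would point at service-account/project visibility:
openstack --os-auth-url <keystone-auth-url> \
          --os-username <octavia-service-user> \
          --os-password <service-password> \
          --os-project-name <service-project> \
          network show c0d40dfd-123e-4a3c-92de-eb7b57178dd3
```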
From openstack at fried.cc Thu Oct 18 16:43:33 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 18 Oct 2018 11:43:33 -0500 Subject: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <4b02964cfa8a43b29a11d98a6606ccd6@AUSX13MPS308.AMER.DELL.COM> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <4b02964cfa8a43b29a11d98a6606ccd6@AUSX13MPS308.AMER.DELL.COM> Message-ID: <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79@fried.cc> Sorry, I'm opposed to this idea. I admit I don't understand the political framework, nor have I read the governing documents beyond [1], but that document makes it clear that this is supposed to be a community-wide vote. Is it really legal for the TC (or whoever has merge rights on [2]) to merge a patch that gives that same body the power to take the decision out of the hands of the community? So it's really an oligarchy that gives its constituency the illusion of democracy until something comes up that it feels like not having a vote on? The fact that it's something relatively "unimportant" (this time) is not a comfort. Not that I think the TC would necessarily move forward with [2] in the face of substantial opposition from non-TC "cores" or whatever. I will vote enthusiastically for "Train". But a vote it should be. -efried [1] https://governance.openstack.org/tc/reference/release-naming.html [2] https://review.openstack.org/#/c/611511/ On 10/18/2018 10:52 AM, Arkady.Kanevsky at dell.com wrote: > +1 for the poll. > > Let's follow well established process. > > If we want to add Train as one of the options for the name I am OK with it. > > > > *From:* Jonathan Mills > *Sent:* Thursday, October 18, 2018 10:49 AM > *To:* openstack-sigs at lists.openstack.org > *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack > > > > +1 for just having a poll > > > > On Thu, Oct 18, 2018 at 11:39 AM David Medberry > wrote: > > I'm fine with Train but I'm also fine with just adding it to the > list and voting on it. It will win. > > > > Also, for those not familiar with the debian/ubuntu command "sl", > now is the time to become so. > > > > apt install sl > > sl -Flea #ftw > > > > On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds > > wrote: > > Hello all, > As per [1] the nomination period for names for the T release has > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at [2]. > > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My thinking for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > some time, and it was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If Train causes a problem for trademark reasons then we can always > run the poll > > I'll leave [3] marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony.
> > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] > https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] > https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > From michael.d.moore at nasa.gov Thu Oct 18 18:24:36 2018 From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) Date: Thu, 18 Oct 2018 18:24:36 +0000 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> Message-ID: Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I’m seeing unexpected behavior in our Queens environment related to > Glance image visibility. Specifically users who, based on my > understanding of the visibility and ownership fields, should NOT be able > to see or view the image. > > If I create a new image with openstack image create and specify –project > and –private a non-admin user in a different tenant can see and > boot that image. > > That seems to be the opposite of what should happen. Any ideas? Yep, something's not right there. Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token) ? Did you verify that the image's owner is what you intended, and that the visibility really is "private" ? ~iain _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From ignaziocassano at gmail.com Thu Oct 18 19:00:37 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 18 Oct 2018 21:00:37 +0200 Subject: [Openstack-operators] nova_api resource_providers table issues on ocata In-Reply-To: References: <39f242e4-a13e-f184-2e37-c4618dae713a@gmail.com> Message-ID: Hello, sorry for late in my answer.... 
the following is the content of my ocata repo file: [centos-openstack-ocata] name=CentOS-7 - OpenStack ocata baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-ocata/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud exclude=sip,PyQt4 EPEL is not enabled, as suggested in the documentation. Regards Ignazio On Thu 18 Oct 2018 at 10:24, Sylvain Bauza wrote: > > > On Wed, Oct 17, 2018 at 4:46 PM Ignazio Cassano > wrote: > >> Hello, here the select you suggested: >> >> MariaDB [nova]> select * from shadow_services; >> Empty set (0,00 sec) >> >> MariaDB [nova]> select * from shadow_compute_nodes; >> Empty set (0,00 sec) >> >> As far as the upgrade tooling is concerned, we are using only yum update >> on old compute nodes to have the same packages installed on the new >> compute nodes >> > > > Well, to be honest, I was looking at some other bug for OSP > https://bugzilla.redhat.com/show_bug.cgi?id=1636463 which is pretty > identical, so you're not alone :-) > For some reason, yum update modifies something in the DB that I don't know > yet. Which exact packages are you using? RDO ones? > > I marked the downstream bug as NOTABUG since I wasn't able to reproduce it > and given I also provided a SQL query for fixing it, but maybe we should > try to see which specific package has a problem... > > -Sylvain > > Procedure used >> We have an openstack with 3 compute nodes: podto1-kvm01, podto1-kvm02, >> podto1-kvm03 >> 1) install a new compute node (podto1-kvm04) >> 2) On controller we discovered the new compute node: su -s /bin/sh -c >> "nova-manage cell_v2 discover_hosts --verbose" nova >> 3) Evacuate podto1-kvm01 >> 4) yum update on podto1-kvm01 and reboot it >> 5) Evacuate podto1-kvm02 >> 6) yum update on podto1-kvm02 and reboot it >> 7) Evacuate podto1-kvm03 >> 8) yum update podto1-kvm03 and reboot it >> >> >> >> On Wed 17 Oct 2018 at 16:37, Matt Riedemann < >> mriedemos at gmail.com> wrote: >> >>> On 10/17/2018 9:13 AM, Ignazio Cassano wrote: >>> > Hello Sylvain, here the output of some selects: >>> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes; >>> > +--------------+---------------------+ >>> > | host | hypervisor_hostname | >>> > +--------------+---------------------+ >>> > | podto1-kvm01 | podto1-kvm01 | >>> > | podto1-kvm02 | podto1-kvm02 | >>> > | podto1-kvm03 | podto1-kvm03 | >>> > | podto1-kvm04 | podto1-kvm04 | >>> > | podto1-kvm05 | podto1-kvm05 | >>> > +--------------+---------------------+ >>> > >>> > MariaDB [nova]> select host from compute_nodes where >>> host='podto1-kvm01' >>> > and hypervisor_hostname='podto1-kvm01'; >>> > +--------------+ >>> > | host | >>> > +--------------+ >>> > | podto1-kvm01 | >>> > +--------------+ >>> >>> Does your upgrade tooling run a db archive/purge at all? It's possible >>> that the actual services table record was deleted via the os-services >>> REST API for some reason, which would delete the compute_nodes table >>> record, and then a restart of the nova-compute process would recreate >>> the services and compute_nodes table records, but with a new compute >>> node uuid and thus a new resource provider. >>> >>> Maybe query your shadow_services and shadow_compute_nodes tables for >>> "podto1-kvm01" and see if a record existed at one point, was deleted and >>> then archived to the shadow tables.
>>> >>> -- >>> >>> Thanks, >>> >>> Matt >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Oct 18 19:19:50 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 18 Oct 2018 12:19:50 -0700 Subject: [Openstack-operators] [OCTAVIA][QUEENS][KOLLA] - network/subnet not found. In-Reply-To: References: Message-ID: Hi there. I'm not sure what is happening there and I don't use kolla, so I need to ask a few more questions. Is that network ID being used for the VIP or the lb-mgmt-net? Any chance you can provide a debug log paste from the API process for this request? Basically it is saying that network ID is invalid for the user making the request. This can happen if the user token being used doesn't have access to that network, as an example. It could also be a permissions issue with the service auth account being used for the Octavia API, but this is unlikely. Michael On Thu, Oct 18, 2018 at 8:51 AM Gaël THEROND wrote: > > Hi guys, > > I'm back to business with Octavia after a long time but I'm facing an issue that seems a little bit tricky. > > When trying to create a LB using either APIs (cURL/postman) calls or openstack-client the request finish with an error such as: > > `Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400)` > > If I put my client or the Octavia api in DEBUG mode, I found out neutron to correctly sending back to him a RESP BODY with the requested network/subnet in it. > > Here is the stacktrace that I get from both, Openstack client and the Octavia API logs: > > ``` > POST call to https://api-emea-west-az1.cloud.inkdrop.sh:9876/v2.0/lbaas/loadbalancers used request id req-2f929192-4e60-491b-b65d-3a7bef43e978 > Request returned failure status: 400 > Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. 
(HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > Traceback (most recent call last): > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper > response = func(*args, **kwargs) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create > response = self.create(url, **params) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create > ret = self._request(method, url, session=session, **params) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request > return session.request(url, method, **kwargs) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request > raise exceptions.from_response(resp, method, url) > keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand > result = cmd.run(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run > return super(Command, self).run(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action > json=body) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper > request_id=e.request_id) > octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > clean_up CreateLoadBalancer: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. 
(HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > Traceback (most recent call last): > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper > response = func(*args, **kwargs) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create > response = self.create(url, **params) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create > ret = self._request(method, url, session=session, **params) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request > return session.request(url, method, **kwargs) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request > raise exceptions.from_response(resp, method, url) > keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 135, in run > ret_val = super(OpenStackShell, self).run(argv) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 281, in run > result = self.run_subcommand(remainder) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 175, in run_subcommand > ret_value = super(OpenStackShell, self).run_subcommand(argv) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand > result = cmd.run(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run > return super(Command, self).run(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action > json=body) > File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper > request_id=e.request_id) > octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978) > > END return value: 1 > ``` > I'm using the following openstack clients and libraries: > > ``` > keystoneauth1==3.11.0 > kolla-ansible==6.1.0 > openstacksdk==0.17.2 > os-client-config==1.31.2 > os-service-types==1.3.0 > osc-lib==1.11.1 > oslo.config==6.5.1 > oslo.context==2.21.0 > oslo.i18n==3.22.1 > oslo.log==3.40.1 > oslo.serialization==2.28.1 > oslo.utils==3.37.0 > python-cinderclient==4.1.0 > python-dateutil==2.7.3 > python-glanceclient==2.12.1 > python-keystoneclient==3.17.0 > python-neutronclient==6.10.0 > python-novaclient==11.0.0 > python-octaviaclient==1.6.0 > python-openstackclient==3.16.1 > ``` > If on the same virtualenv I'm doing: > > `openstack --os-cloud ic-emea-west-az0 --os-region ic-emea-west-az1 network list` > > I correctly get my requested network/subnet id. > > I'm using Kolla to deploy octavia and get the exact same issue with all the kolla 6.0.0 to 6.1.0 serie. 
> > If anyone has an idea, I'm all in ^^ > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From skaplons at redhat.com Thu Oct 18 20:10:48 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 18 Oct 2018 22:10:48 +0200 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <78F870CF-14EA-4BC5-BA97-FF0D871ED141@rm.ht> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <4b02964cfa8a43b29a11d98a6606ccd6@AUSX13MPS308.AMER.DELL.COM> <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79@fried.cc> <78F870CF-14EA-4BC5-BA97-FF0D871ED141@rm.ht> Message-ID: > Message written by Remo Mattei on 18.10.2018, at 19:08: > > Michal, that will never work, it's 11 characters long Shorter could be Openstack Trouble ;) > > > > >> On Oct 18, 2018, at 09:43, Eric Fried wrote: >> >> Sorry, I'm opposed to this idea. >> >> I admit I don't understand the political framework, nor have I read the >> governing documents beyond [1], but that document makes it clear that >> this is supposed to be a community-wide vote. Is it really legal for >> the TC (or whoever has merge rights on [2]) to merge a patch that gives >> that same body the power to take the decision out of the hands of the >> community? So it's really an oligarchy that gives its constituency the >> illusion of democracy until something comes up that it feels like not >> having a vote on? The fact that it's something relatively "unimportant" >> (this time) is not a comfort. >> >> Not that I think the TC would necessarily move forward with [2] in the >> face of substantial opposition from non-TC "cores" or whatever. >> >> I will vote enthusiastically for "Train". But a vote it should be. >> >> -efried >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> [2] https://review.openstack.org/#/c/611511/ >> >> On 10/18/2018 10:52 AM, Arkady.Kanevsky at dell.com wrote: >>> +1 for the poll. >>> >>> Let's follow well established process. >>> >>> If we want to add Train as one of the options for the name I am OK with it. >>> >>> >>> >>> *From:* Jonathan Mills >>> *Sent:* Thursday, October 18, 2018 10:49 AM >>> *To:* openstack-sigs at lists.openstack.org >>> *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack >>> >>> >>> >>> +1 for just having a poll >>> >>> >>> >>> On Thu, Oct 18, 2018 at 11:39 AM David Medberry >> > wrote: >>> >>> I'm fine with Train but I'm also fine with just adding it to the >>> list and voting on it. It will win. >>> >>> >>> >>> Also, for those not familiar with the debian/ubuntu command "sl", >>> now is the time to become so. >>> >>> >>> >>> apt install sl >>> >>> sl -Flea #ftw >>> >>> >>> >>> On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds >>> > wrote: >>> >>> Hello all, >>> As per [1] the nomination period for names for the T release has >>> now closed (actually 3 days ago sorry). The nominated names and any >>> qualifying remarks can be seen at [2].
>>> >>> Proposed Names >>> * Tarryall >>> * Teakettle >>> * Teller >>> * Telluride >>> * Thomas >>> * Thornton >>> * Tiger >>> * Tincup >>> * Timnath >>> * Timber >>> * Tiny Town >>> * Torreys >>> * Trail >>> * Trinidad >>> * Treasure >>> * Troublesome >>> * Trussville >>> * Turret >>> * Tyrone >>> >>> Proposed Names that do not meet the criteria >>> * Train >>> >>> However I'd like to suggest we skip the CIVS poll and select >>> 'Train' as >>> the release name by TC resolution[3]. My think for this is >>> >>> * It's fun and celebrates a humorous moment in our community >>> * As a developer I've heard the T release called Train for quite >>> sometime, and was used often at the PTG[4]. >>> * As the *next* PTG is also in Colorado we can still choose a >>> geographic based name for U[5] >>> * If train causes a problem for trademark reasons then we can >>> always >>> run the poll >>> >>> I'll leave[3] for marked -W for a week for discussion to happen >>> before the >>> TC can consider / vote on it. >>> >>> Yours Tony. >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >>> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >>> [3] >>> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >>> [4] https://twitter.com/vkmc/status/1040321043959754752 >>> [5] >>> https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >>> >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From mriedemos at gmail.com Thu Oct 18 22:07:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 18 Oct 2018 17:07:00 -0500 Subject: [Openstack-operators] [nova] Removing the CachingScheduler Message-ID: It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work for migrating his deployment from the CachingScheduler to the FilterScheduler + Placement. If you are using the CachingScheduler and have a problem with its removal, now is the time to speak up or forever hold your peace. 
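For anyone planning the move off the CachingScheduler, the healing step is roughly this (a sketch based on the Rocky nova-manage docs; the batch size is arbitrary):

```
# Create missing placement allocations for existing instances, in
# batches; re-run until it reports nothing left to heal (see the
# nova-manage docs for the exact exit codes):
nova-manage placement heal_allocations --max-count 50 --verbose
```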
[1] https://review.openstack.org/#/c/611723/1 -- Thanks, Matt From michael.d.moore at nasa.gov Thu Oct 18 22:11:40 2018 From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) Date: Thu, 18 Oct 2018 22:11:40 +0000 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> Message-ID: I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I’m seeing unexpected behavior in our Queens environment related to > Glance image visibility. Specifically users who, based on my > understanding of the visibility and ownership fields, should NOT be able > to see or view the image. > > If I create a new image with openstack image create and specify –project > and –private a non-admin user in a different tenant can see and > boot that image. > > That seems to be the opposite of what should happen. Any ideas? Yep, something's not right there. Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token) ? Did you verify that the image's owner is what you intended, and that the visibility really is "private" ? ~iain _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From bitskrieg at bitskrieg.net Thu Oct 18 22:23:35 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Thu, 18 Oct 2018 18:23:35 -0400 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> Message-ID: <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Do you have a liberal/custom policy.json that perhaps is causing unexpected behavior? Can't seem to reproduce this. On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > I have replicated this unexpected behavior in a Pike test environment, in > addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, > INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I > created a new image, set to private with the project defined as our admin > tenant. 
> > In the database I can see that the image is 'private' and the owner is the > ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> I’m seeing unexpected behavior in our Queens environment related to >> Glance image visibility. Specifically users who, based on my >> understanding of the visibility and ownership fields, should NOT be able >> to see or view the image. >> >> If I create a new image with openstack image create and specify –project >> and –private a non-admin user in a different tenant can see and >> boot that image. >> >> That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From iain.macdonnell at oracle.com Thu Oct 18 22:25:22 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 18 Oct 2018 15:25:22 -0700 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com> I suspect that your non-admin user is not really non-admin. How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. 
Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > From michael.d.moore at nasa.gov Thu Oct 18 22:32:42 2018 From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) Date: Thu, 18 Oct 2018 22:32:42 +0000 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com> References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com> Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov> openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike openstack role add --user mike --user-domain default --project test user my admin account is in the NDC domain with a different username. /etc/glance/policy.json { "context_is_admin": "role:admin", "default": "role:admin", I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: I suspect that your non-admin user is not really non-admin. How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. 
> > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > From iain.macdonnell at oracle.com Thu Oct 18 22:48:27 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 18 Oct 2018 15:48:27 -0700 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov> References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com> <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov> Message-ID: That all looks fine. I believe that the "default" policy applies in place of any that's not explicitly specified - i.e. "if there's no matching policy below, you need to have the admin role to be able to do it". I do have that line in my policy.json, and I cannot reproduce your problem (see below). I'm not using domains (other than "default"). I wonder if that's a factor... 
~iain


$ openstack user create --password foo user1
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d18c0031ec56430499a2d690cb1f125c |
| name                | user1                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack user create --password foo user2
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | be9f1061a5104abd834eabe98dff055d |
| name                | user2                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack project create project1
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 826876d6d3724018bae6253c7f540cb3 |
| is_domain   | False                            |
| name        | project1                         |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
$ openstack project create project2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | b446b93ac6e24d538c1943acbdd13cb2 |
| is_domain   | False                            |
| name        | project2                         |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
$ openstack role add --user user1 --project project1 _member_
$ openstack role add --user user2 --project project2 _member_
$ export OS_PASSWORD=foo
$ export OS_USERNAME=user1
$ export OS_PROJECT_NAME=project1
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image create --private image1
+------------------+------------------------------------------------------------------------------+
| Field            | Value                                                                        |
+------------------+------------------------------------------------------------------------------+
| checksum         | None                                                                         |
| container_format | bare                                                                         |
| created_at       | 2018-10-18T22:17:41Z                                                         |
| disk_format      | raw                                                                          |
| file             | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file                         |
| id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4                                         |
| min_disk         | 0                                                                            |
| min_ram          | 0                                                                            |
| name             | image1                                                                       |
| owner            | 826876d6d3724018bae6253c7f540cb3                                             |
| properties       | locations='[]', os_hash_algo='None', os_hash_value='None', os_hidden='False' |
| protected        | False                                                                        |
| schema           | /v2/schemas/image                                                            |
| size             | None                                                                         |
| status           | queued                                                                       |
| tags             |                                                                              |
| updated_at       | 2018-10-18T22:17:41Z                                                         |
| virtual_size     | None                                                                         |
| visibility       | private                                                                      |
+------------------+------------------------------------------------------------------------------+
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
| 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
+--------------------------------------+--------+--------+
$ export OS_USERNAME=user2
$ export OS_PROJECT_NAME=project2
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
+--------------------------------------+--------+--------+
$ export OS_USERNAME=admin
$ export OS_PROJECT_NAME=admin
$ export OS_PASSWORD=xxx
$ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4
$ export OS_USERNAME=user2
$ export OS_PROJECT_NAME=project2
$ export OS_PASSWORD=foo
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
| 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
+--------------------------------------+--------+--------+
$

On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
INTEGRA, INC.] wrote:
> openstack user create --domain default --password xxxxxxxx --project-domain
> ndc --project test mike
>
>
> openstack role add --user mike --user-domain default --project test user
>
> my admin account is in the NDC domain with a different username.
>
>
>
> /etc/glance/policy.json
> {
>
> "context_is_admin": "role:admin",
> "default": "role:admin",
>
>
>
>
> I'm not terribly familiar with the policies but I feel like that default
> line is making everyone an admin by default?
>
>
> Mike Moore, M.S.S.E.
>
> Systems Engineer, Goddard Private Cloud
> Michael.D.Moore at nasa.gov
>
> Hydrogen fusion brightens my day.
>
>
> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote:
>
>
> I suspect that your non-admin user is not really non-admin. How did you
> create it?
>
> What you have for "context_is_admin" in glance's policy.json ?
>
> ~iain
>
>
> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> INTEGRA, INC.] wrote:
> > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment.
> >
> >
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> >
> >
> > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote:
> >
> > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant.
> >
> > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant.
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> >
> >
> > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote:
> >
> >
> >
> > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> > INTEGRA, INC.] wrote:
> > > I'm seeing unexpected behavior in our Queens environment related to
> > > Glance image visibility. Specifically users who, based on my
> > > understanding of the visibility and ownership fields, should NOT be able
> > > to see or view the image.
> > >
> > > If I create a new image with openstack image create and specify --project
> > > and --private a non-admin user in a different tenant can see and
> > > boot that image.
> > >
> > > That seems to be the opposite of what should happen. Any ideas?
> >
> > Yep, something's not right there.
> >
> > Are you sure that the user that can see the image doesn't have the admin
> > role (for the project in its keystone token) ?
> >
> > Did you verify that the image's owner is what you intended, and that the
> > visibility really is "private" ?
> > > > ~iain > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > From bitskrieg at bitskrieg.net Thu Oct 18 23:23:42 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Thu, 18 Oct 2018 19:23:42 -0400 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov> <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com> <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov> Message-ID: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. 
>>> >
>>> > That seems to be the opposite of what should happen. Any ideas?
>>>
>>> Yep, something's not right there.
>>>
>>> Are you sure that the user that can see the image doesn't have the admin
>>> role (for the project in its keystone token) ?
>>>
>>> Did you verify that the image's owner is what you intended, and that the
>>> visibility really is "private" ?
>>>
>>> ~iain
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From vondra at homeatcloud.cz Fri Oct 19 08:58:30 2018
From: vondra at homeatcloud.cz (Tomáš Vondra)
Date: Fri, 19 Oct 2018 10:58:30 +0200
Subject: [Openstack-operators] osops-tools-monitoring Dependency problems
Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz>

Hi!
I'm a long-time user of monitoring-for-openstack, also known as oschecks.
Concretely, I used a version from 2015 with the OpenStack python client
libraries from Kilo. Now I have upgraded them to Mitaka and it got broken.
Even the latest oschecks don't work. I didn't quite expect that, given
that there are several commits from this year, e.g. by Nagasai Vinaykumar
Kapalavai and paramite. Can one of them or some other user step up and say
which version of the OpenStack clients oschecks works with? Ideally, write
it down in requirements.txt so that it will be reproducible. Also, some
documentation of the minimal set of parameters would come in handy.

Thanks a lot,
Tomas from Homeatcloud

The error messages are as absurd as:

oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0'
--os_username=monitoring --os_password=XXX --os_tenant_name=monitoring

CRITICAL: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in safe_run
    method()
  File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in _check_glance_api
    glance = utils.Glance()
  File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in __init__
    self.glance.parser = self.glance.get_base_parser(sys.argv)
TypeError: get_base_parser() takes exactly 1 argument (2 given)

(I can see 4 parameters on the command line.)
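[A plausible stopgap until requirements.txt gets pinned upstream: run
oschecks out of its own virtualenv with client libraries from the era the
code targets. A sketch only - the repo layout and the version caps below
are assumptions (illustrative Kilo/Mitaka-era guesses, not tested values):

    # isolate oschecks and its dependencies from the system-wide clients
    virtualenv /opt/oschecks-venv
    . /opt/oschecks-venv/bin/activate
    git clone https://git.openstack.org/openstack/osops-tools-monitoring
    pip install ./osops-tools-monitoring/monitoring-for-openstack
    # pin the clients to something period-correct; adjust the upper
    # bounds to whatever actually works for you
    pip install 'python-glanceclient<2.0' 'python-novaclient<4.0' \
        'python-keystoneclient<3.0' 'python-cinderclient<2.0'

Running the checks from that venv keeps the pinned clients in effect.]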
From christian.zunker at codecentric.cloud Fri Oct 19 09:21:25 2018
From: christian.zunker at codecentric.cloud (Christian Zunker)
Date: Fri, 19 Oct 2018 11:21:25 +0200
Subject: [Openstack-operators] [heat][cinder] How to create stack snapshot including volumes
Message-ID: 

Hi List,

I'd like to take snapshots of heat stacks including the volumes.
From what I found until now, this should be possible. You just have to
configure some parts of OpenStack. I enabled cinder-backup with a Ceph
backend. Backups of volumes are working. I configured heat to include
the option backups_enabled = True.

When I use openstack stack snapshot create, I get a snapshot but no
backups of my volumes. I don't get any error messages in heat. Debug
logging didn't help either.

OpenStack version is Pike on Ubuntu, installed with openstack-ansible.
heat version is 9.0.3, so this should also include this bugfix:
https://bugs.launchpad.net/heat/+bug/1687006

Is anybody using this feature? What am I missing?

Best regards
Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
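[One easy-to-miss detail here: in Pike-era heat that flag lives in the
[volumes] section of heat.conf on the heat-engine hosts, not in [DEFAULT].
A sketch - the section name is recalled from the sample config, so treat
it as an assumption to verify against your heat.conf.sample:

    # /etc/heat/heat.conf on the heat-engine containers/hosts
    [volumes]
    # tells heat that a cinder-backup service is deployed, so
    # "openstack stack snapshot create" backs volumes up instead
    # of silently skipping them
    backups_enabled = True

heat-engine needs a restart afterwards to pick the option up.]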
From adrian at fleio.com Fri Oct 19 09:42:00 2018
From: adrian at fleio.com (Adrian Andreias)
Date: Fri, 19 Oct 2018 12:42:00 +0300
Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID: 

Hello,

We've just released Fleio version 1.1.

Fleio is a billing solution and control panel for OpenStack public clouds
and traditional web hosters.

Fleio software automates the entire process for cloud users. New customers
can use Fleio to sign up for an account, pay invoices, add credit to their
account, as well as create and manage cloud resources such as virtual
machines, storage and networking.

Full feature list:
https://fleio.com#features

You can see an online demo:
https://fleio.com/demo

And sign-up for a free trial:
https://fleio.com/signup

Cheers!

- Adrian Andreias
https://fleio.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at bakeyournoodle.com Fri Oct 19 09:54:29 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 19 Oct 2018 20:54:29 +1100
Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack
In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com>
References: <20181018063539.GC6589@thor.bakeyournoodle.com>
Message-ID: <20181019095428.GA9399@thor.bakeyournoodle.com>

On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
>     As per [1] the nomination period for names for the T release have
> now closed (actually 3 days ago sorry). The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
>  * Tarryall
>  * Teakettle
>  * Teller
>  * Telluride
>  * Thomas
>  * Thornton
>  * Tiger
>  * Tincup
>  * Timnath
>  * Timber
>  * Tiny Town
>  * Torreys
>  * Trail
>  * Trinidad
>  * Treasure
>  * Troublesome
>  * Trussville
>  * Turret
>  * Tyrone
>
> Proposed Names that do not meet the criteria
>  * Train

I have re-worked my openstack/governance change [1] to ask the TC to
accept adding Train to the poll as (partially) described in [2].

I present the names above to the community and Foundation marketing team
for consideration. The list above does contain Train; clearly, if the TC
do not approve [1], Train will not be included in the poll when created.

I apologise for any offence or slight caused by my previous email in this
thread. It was well intentioned albeit, with hindsight, poorly thought
through.

Yours Tony.

[1] https://review.openstack.org/#/c/611511/
[2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From michael.d.moore at nasa.gov Fri Oct 19 16:33:17 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Fri, 19 Oct 2018 16:33:17 +0000
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
In-Reply-To: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net>
References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov>
 <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com>
 <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov>
 <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net>
Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2@nasa.gov>

Our NDC domain is LDAP backed. Default is not.

Our keystone policy.json file is empty: {}

Mike Moore, M.S.S.E.

Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.


On 10/18/18, 7:24 PM, "Chris Apsey" wrote:

We are using multiple keystone domains - still can't reproduce this.

Do you happen to have a customized keystone policy.json?

Worst case, I would launch a devstack of your targeted release. If you
can't reproduce the issue there, you would at least know it's caused by a
nonstandard config rather than a bug (or at least not a bug that's present
when using a default config)

On October 18, 2018 18:50:12 iain MacDonnell wrote:
> That all looks fine.
>
> I believe that the "default" policy applies in place of any that's not
> explicitly specified - i.e. "if there's no matching policy below, you
> need to have the admin role to be able to do it". I do have that line in
> my policy.json, and I cannot reproduce your problem (see below).
>
> I'm not using domains (other than "default"). I wonder if that's a factor...
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. 
>>> >
>>> > That seems to be the opposite of what should happen. Any ideas?
>>>
>>> Yep, something's not right there.
>>>
>>> Are you sure that the user that can see the image doesn't have the admin
>>> role (for the project in its keystone token) ?
>>>
>>> Did you verify that the image's owner is what you intended, and that the
>>> visibility really is "private" ?
>>>
>>> ~iain
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From michael.d.moore at nasa.gov Fri Oct 19 16:54:12 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Fri, 19 Oct 2018 16:54:12 +0000
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
In-Reply-To: <4704898B-D193-4540-B106-BF38ACAB68E2@nasa.gov>
References: <78B4F109-01F3-4B65-90AD-8A3E74DB5ABB@nasa.gov>
 <11e3f7a6-875e-4b6c-259a-147188a860e1@oracle.com>
 <44085CC4-899C-49B2-9934-0800F6650B0B@nasa.gov>
 <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net>
 <4704898B-D193-4540-B106-BF38ACAB68E2@nasa.gov>
Message-ID: 

For reference, here is our full glance policy.json:

{
    "context_is_admin": "role:admin",
    "default": "role:admin",
    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "communitize_image": "",
    "copy_from": "",
    "download_image": "",
    "upload_image": "",
    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",
    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",
    "manage_image_cache": "role:admin",
    "get_task": "",
    "get_tasks": "",
    "add_task": "",
    "modify_task": "",
    "tasks_api_access": "role:admin",
    "deactivate": "",
    "reactivate": "",
    "get_metadef_namespace": "",
    "get_metadef_namespaces": "",
    "modify_metadef_namespace": "",
    "add_metadef_namespace": "",
    "get_metadef_object": "",
    "get_metadef_objects": "",
    "modify_metadef_object": "",
    "add_metadef_object": "",
    "list_metadef_resource_types": "",
    "get_metadef_resource_type": "",
    "add_metadef_resource_type_association": "",
    "get_metadef_property": "",
    "get_metadef_properties": "",
    "modify_metadef_property": "",
    "add_metadef_property": "",
    "get_metadef_tag": "",
    "get_metadef_tags": "",
    "modify_metadef_tag": "",
    "add_metadef_tag": "",
    "add_metadef_tags": ""
}

Mike Moore, M.S.S.E.

Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.
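[Given the empty keystone policy.json and the "default": "role:admin"
line above, it may be worth confirming exactly what roles the user's
token carries. A quick check, reusing the names from the earlier
openstack user create commands - adjust to your own deployment:

    # every role assignment for the user, with names resolved
    # (add --effective to expand group-derived assignments)
    openstack role assignment list --names \
        --user mike --user-domain default \
        --project test --project-domain ndc

    # confirm the image's owner and visibility as glance stores them
    openstack image show <image-id> -c owner -c visibility

If "admin" shows up anywhere in that list, the "non-admin" user isn't.]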
On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {} Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 7:24 PM, "Chris Apsey" wrote: We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... > > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > 
+------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. 
>>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. >>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jaypipes at gmail.com Fri Oct 19 17:45:03 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 19 Oct 2018 13:45:03 -0400 Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released In-Reply-To: References: Message-ID: Please do not use these mailing lists to advertise closed-source/proprietary software solutions. Thank you, -jay On 10/19/2018 05:42 AM, Adrian Andreias wrote: > Hello, > > We've just released Fleio version 1.1. > > Fleio is a billing solution and control panel for OpenStack public > clouds and traditional web hosters. > > Fleio software automates the entire process for cloud users. 
New > customers can use Fleio to sign up for an account, pay invoices, add > credit to their account, as well as create and manage cloud resources > such as virtual machines, storage and networking. > > Full feature list: > https://fleio.com#features > > You can see an online demo: > https://fleio.com/demo > > And sign-up for a free trial: > https://fleio.com/signup > > > > Cheers! > > - Adrian Andreias > https://fleio.com > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mnaser at vexxhost.com Fri Oct 19 18:13:40 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 19 Oct 2018 20:13:40 +0200 Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released In-Reply-To: References: Message-ID: On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote: > > Please do not use these mailing lists to advertise > closed-source/proprietary software solutions. +1 > Thank you, > -jay > > On 10/19/2018 05:42 AM, Adrian Andreias wrote: > > Hello, > > > > We've just released Fleio version 1.1. > > > > Fleio is a billing solution and control panel for OpenStack public > > clouds and traditional web hosters. > > > > Fleio software automates the entire process for cloud users. New > > customers can use Fleio to sign up for an account, pay invoices, add > > credit to their account, as well as create and manage cloud resources > > such as virtual machines, storage and networking. > > > > Full feature list: > > https://fleio.com#features > > > > You can see an online demo: > > https://fleio.com/demo > > > > And sign-up for a free trial: > > https://fleio.com/signup > > > > > > > > Cheers! > > > > - Adrian Andreias > > https://fleio.com > > > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From emccormick at cirrusseven.com Fri Oct 19 18:39:29 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 19 Oct 2018 14:39:29 -0400 Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface Message-ID: I've been wrestling with getting Octavia up and running and have become stuck on two issues. I'm hoping someone has run into these before. My google foo has come up empty. Issue 1: When the Octavia controller tries to poll the amphora instance, it tries repeatedly and eventually fails. The error on the controller side is: 2018-10-19 14:17:39.181 26 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 300) exhausted. The amphora is unavailable. 
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) On the amphora side I see: [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1754) I've generated certificates both with the script in the Octavia git repo, and with the Openstack Ansible playbook. I can see that they are present in /etc/octavia/certs. I'm using the Kolla (Queens) containers for the control plane so I'm sure I've satisfied all the python library constraints. Issue 2: I"m not sure how it gets configured, but the tenant network interface (ens6) never comes up. I can spawn other instances on that network with no issue, and I can see that Neutron has the port attached to the instance. However, in the instance this is all I get: ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000 link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe30:c460/64 scope link valid_lft forever preferred_lft forever 3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff There's no evidence of the interface anywhere else including udev rules. Any help with either or both issues would be greatly appreciated. Cheers, Erik From gael.therond at gmail.com Fri Oct 19 23:47:42 2018 From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Sat, 20 Oct 2018 01:47:42 +0200 Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: Message-ID: Hi eric! Glad I’m not the only one having this issue with the ssl communication between the amphora and the CP. Even if I don’t yet get a clear answer regarding that issue, I think your second issue is not an issue as the interface is mounted on a namespace and so you’ll need to list all nic even those from namespace. Use an ip netns ls to get the namespace. Hope it will help. Le ven. 19 oct. 2018 à 20:40, Erik McCormick a écrit : > I've been wrestling with getting Octavia up and running and have > become stuck on two issues. I'm hoping someone has run into these > before. My google foo has come up empty. 
>
> Issue 1:
> When the Octavia controller tries to poll the amphora instance, it
> tries repeatedly and eventually fails. The error on the controller
> side is:
>
> 2018-10-19 14:17:39.181 26 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> retries (currently set to 300) exhausted. The amphora is unavailable.
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),))
>
> On the amphora side I see:
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> failure (_ssl.c:1754)
>
> I've generated certificates both with the script in the Octavia git
> repo, and with the OpenStack-Ansible playbook. I can see that they are
> present in /etc/octavia/certs.
>
> I'm using the Kolla (Queens) containers for the control plane so I'm
> sure I've satisfied all the Python library constraints.
>
> Issue 2:
> I'm not sure how it gets configured, but the tenant network interface
> (ens6) never comes up. I can spawn other instances on that network
> with no issue, and I can see that Neutron has the port attached to the
> instance. However, in the instance this is all I get:
>
> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
>     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f816:3eff:fe30:c460/64 scope link
>        valid_lft forever preferred_lft forever
> 3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
>
> There's no evidence of the interface anywhere else including udev rules.
>
> Any help with either or both issues would be greatly appreciated.
>
> Cheers,
> Erik
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
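For issue 1, a useful first check is whether the controllers and the amphorae were actually issued from the same certificate authority. A minimal sketch, run from a controller host, assuming the single-CA layout produced by Octavia's create_certificates.sh script -- the file names here are illustrative and should be matched to the client_cert and server_ca options in the [haproxy_amphora] section of your octavia.conf:

$ cd /etc/octavia/certs
# Does the client certificate chain to the CA in use?
$ openssl verify -CAfile ca_01.pem client.pem
# Attempt the same handshake the controller makes; inspect the
# "Verify return code" line and the certificate the amphora presents:
$ openssl s_client -connect 10.7.0.112:9443 \
      -cert client.pem -key client.pem -CAfile ca_01.pem </dev/null

Since the certificates above were generated twice (once with the repo script and once with the OpenStack-Ansible playbook), it is worth confirming that the amphora image and the running controllers are not holding material from two different CA runs; re-issuing everything from one CA and rebuilding the amphorae is a common fix for this kind of mutual handshake failure.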
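For issue 2, a quick way to apply Gaël's suggestion inside the amphora. amphora-haproxy is the namespace name the amphora agent normally creates; note that the VIP interface is only configured and moved into that namespace once the plug/vip call from issue 1 succeeds, so while issue 1 is unresolved the namespace may be missing or the interface may stay DOWN:

$ sudo ip netns list
amphora-haproxy
$ sudo ip netns exec amphora-haproxy ip addr show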
From logan.hicks at live.com Sat Oct 20 00:00:30 2018
From: logan.hicks at live.com (Logan Hicks)
Date: Sat, 20 Oct 2018 00:00:30 +0000
Subject: [Openstack-operators] OpenStack-operators Digest, Vol 96, Issue 7
Message-ID:

Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey)

I noticed that the image says queued. If I'm not mistaken, an image can't have permissions applied until after the image is created, which might explain the issue he's seeing. The object doesn't exist until it's made by OpenStack.

I'd check to see if something is holding up images being made. I'd start with glance.

Respectfully,

Logan Hicks

-------- Original message --------
From: openstack-operators-request at lists.openstack.org
Date: 10/19/18 7:49 PM (GMT-05:00)
To: openstack-operators at lists.openstack.org
Subject: OpenStack-operators Digest, Vol 96, Issue 7

Send OpenStack-operators mailing list submissions to
openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
or, via email, send a message with subject or body 'help' to
openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of OpenStack-operators digest..."

Today's Topics:

1. [nova] Removing the CachingScheduler (Matt Riedemann)
2. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
3. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey)
4. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell)
5. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
6. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell)
7. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey)
8. osops-tools-monitoring Dependency problems (Tomáš Vondra)
9. [heat][cinder] How to create stack snapshot including volumes (Christian Zunker)
10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias)
11. Re: [Openstack-sigs] [all] Naming the T release of OpenStack (Tony Breeds)
12. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
13. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes)
15. Re: Fleio - OpenStack billing - ver. 1.1 released (Mohammed Naser)
16. [Octavia] SSL errors polling amphorae and missing tenant network interface (Erik McCormick)
17.
Re: [Octavia] SSL errors polling amphorae and missing tenant network interface (Gaël THEROND) ---------------------------------------------------------------------- Message: 1 Date: Thu, 18 Oct 2018 17:07:00 -0500 From: Matt Riedemann To: "openstack-operators at lists.openstack.org" Subject: [Openstack-operators] [nova] Removing the CachingScheduler Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work for migrating his deployment from the CachingScheduler to the FilterScheduler + Placement. If you are using the CachingScheduler and have a problem with its removal, now is the time to speak up or forever hold your peace. [1] https://review.openstack.org/#/c/611723/1 -- Thanks, Matt ------------------------------ Message: 2 Date: Thu, 18 Oct 2018 22:11:40 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset="utf-8" I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I’m seeing unexpected behavior in our Queens environment related to > Glance image visibility. Specifically users who, based on my > understanding of the visibility and ownership fields, should NOT be able > to see or view the image. > > If I create a new image with openstack image create and specify –project > and –private a non-admin user in a different tenant can see and > boot that image. > > That seems to be the opposite of what should happen. Any ideas? Yep, something's not right there. Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token) ? Did you verify that the image's owner is what you intended, and that the visibility really is "private" ? ~iain _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 3 Date: Thu, 18 Oct 2018 18:23:35 -0400 From: Chris Apsey To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , iain MacDonnell , Subject: Re: [Openstack-operators] Glance Image Visibility Issue? 
- Non admin users can see private images from other tenants Message-ID: <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> Content-Type: text/plain; format=flowed; charset="UTF-8" Do you have a liberal/custom policy.json that perhaps is causing unexpected behavior? Can't seem to reproduce this. On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > I have replicated this unexpected behavior in a Pike test environment, in > addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, > INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I > created a new image, set to private with the project defined as our admin > tenant. > > In the database I can see that the image is 'private' and the owner is the > ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> I’m seeing unexpected behavior in our Queens environment related to >> Glance image visibility. Specifically users who, based on my >> understanding of the visibility and ownership fields, should NOT be able >> to see or view the image. >> >> If I create a new image with openstack image create and specify –project >> and –private a non-admin user in a different tenant can see and >> boot that image. >> >> That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 4 Date: Thu, 18 Oct 2018 15:25:22 -0700 From: iain MacDonnell To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com> Content-Type: text/plain; charset=utf-8; format=flowed I suspect that your non-admin user is not really non-admin. How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. 
> > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > ------------------------------ Message: 5 Date: Thu, 18 Oct 2018 22:32:42 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov> Content-Type: text/plain; charset="utf-8" openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike openstack role add --user mike --user-domain default --project test user my admin account is in the NDC domain with a different username. /etc/glance/policy.json { "context_is_admin": "role:admin", "default": "role:admin", I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: I suspect that your non-admin user is not really non-admin. 
How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > ------------------------------ Message: 6 Date: Thu, 18 Oct 2018 15:48:27 -0700 From: iain MacDonnell To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed That all looks fine. I believe that the "default" policy applies in place of any that's not explicitly specified - i.e. "if there's no matching policy below, you need to have the admin role to be able to do it". I do have that line in my policy.json, and I cannot reproduce your problem (see below). I'm not using domains (other than "default"). I wonder if that's a factor... 
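As an illustration of the "default" fallback being discussed here (a minimal sketch, not the file from this thread): oslo.policy consults "default" only for actions that have no rule of their own, e.g.

{
    "context_is_admin": "role:admin",
    "default": "role:admin",
    "get_images": ""
}

With this file, get_images carries its own empty (always-true) rule, so any user can list images, while an action that is absent -- say publicize_image -- falls back to "default" and therefore requires the admin role. The full glance policy.json quoted later in this digest gives every image action an explicit "" rule, so the "default" line by itself should not be what exposes private images.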
~iain $ openstack user create --password foo user1 +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | d18c0031ec56430499a2d690cb1f125c | | name | user1 | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ $ openstack user create --password foo user2 +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | be9f1061a5104abd834eabe98dff055d | | name | user2 | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ $ openstack project create project1 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | 826876d6d3724018bae6253c7f540cb3 | | is_domain | False | | name | project1 | | parent_id | default | | tags | [] | +-------------+----------------------------------+ $ openstack project create project2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | b446b93ac6e24d538c1943acbdd13cb2 | | is_domain | False | | name | project2 | | parent_id | default | | tags | [] | +-------------+----------------------------------+ $ openstack role add --user user1 --project project1 _member_ $ openstack role add --user user2 --project project2 _member_ $ export OS_PASSWORD=foo $ export OS_USERNAME=user1 $ export OS_PROJECT_NAME=project1 $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | +--------------------------------------+--------+--------+ $ openstack image create --private image1 +------------------+------------------------------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------------------------------+ | checksum | None | | container_format | bare | | created_at | 2018-10-18T22:17:41Z | | disk_format | raw | | file | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file | | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | | min_disk | 0 | | min_ram | 0 | | name | image1 | | owner | 826876d6d3724018bae6253c7f540cb3 | | properties | locations='[]', os_hash_algo='None', os_hash_value='None', os_hidden='False' | | protected | False | | schema | /v2/schemas/image | | size | None | | status | queued | | tags | | | updated_at | 2018-10-18T22:17:41Z | | virtual_size | None | | visibility | private | +------------------+------------------------------------------------------------------------------+ $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | +--------------------------------------+--------+--------+ $ export OS_USERNAME=user2 $ export OS_PROJECT_NAME=project2 $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | 
+--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | +--------------------------------------+--------+--------+ $ export OS_USERNAME=admin $ export OS_PROJECT_NAME=admin $ export OS_PASSWORD=xxx $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 $ export OS_USERNAME=user2 $ export OS_PROJECT_NAME=project2 $ export OS_PASSWORD=foo $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | +--------------------------------------+--------+--------+ $ On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike > > > openstack role add --user mike --user-domain default --project test user > > my admin account is in the NDC domain with a different username. > > > > /etc/glance/policy.json > { > > "context_is_admin": "role:admin", > "default": "role:admin", > > > > > I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: > > > I suspect that your non-admin user is not really non-admin. How did you > create it? > > What you have for "context_is_admin" in glance's policy.json ? > > ~iain > > > On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > > INTEGRA, INC.] wrote: > > > I’m seeing unexpected behavior in our Queens environment related to > > > Glance image visibility. Specifically users who, based on my > > > understanding of the visibility and ownership fields, should NOT be able > > > to see or view the image. > > > > > > If I create a new image with openstack image create and specify –project > > > and –private a non-admin user in a different tenant can see and > > > boot that image. > > > > > > That seems to be the opposite of what should happen. Any ideas? > > > > Yep, something's not right there. > > > > Are you sure that the user that can see the image doesn't have the admin > > role (for the project in its keystone token) ? > > > > Did you verify that the image's owner is what you intended, and that the > > visibility really is "private" ? 
> > > > ~iain > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > ------------------------------ Message: 7 Date: Thu, 18 Oct 2018 19:23:42 -0400 From: Chris Apsey To: iain MacDonnell , "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> Content-Type: text/plain; format=flowed; charset="UTF-8" We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. 
>>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 8 Date: Fri, 19 Oct 2018 10:58:30 +0200 From: Tomáš Vondra To: Subject: [Openstack-operators] osops-tools-monitoring Dependency problems Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz> Content-Type: text/plain; charset="iso-8859-2" Hi! I'm a long time user of monitoring-for-openstack, also known as oschecks. Concretely, I used a version from 2015 with OpenStack python client libraries from Kilo. Now I have upgraded them to Mitaka and it got broken. Even the latest oschecks don't work. I didn't quite expect that, given that there are several commits from this year e.g. by Nagasai Vinaykumar Kapalavai and paramite. Can one of them or some other user step up and say what version of OpenStack clients is oschecks working with? Ideally, write it down in requirements.txt so that it will be reproducible? Also, some documentation of what is the minimal set of parameters would also come in handy. Thanks a lot, Tomas from Homeatcloud The error messages are as absurd as: oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0' --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring CRITICAL: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in safe_run method() File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in _check_glance_api glance = utils.Glance() File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in __init__ self.glance.parser = self.glance.get_base_parser(sys.argv) TypeError: get_base_parser() takes exactly 1 argument (2 given) (I can see 4 parameters on the command line.) ------------------------------ Message: 9 Date: Fri, 19 Oct 2018 11:21:25 +0200 From: Christian Zunker To: openstack-operators Subject: [Openstack-operators] [heat][cinder] How to create stack snapshot including volumes Message-ID: Content-Type: text/plain; charset="utf-8" Hi List, I'd like to take snapshots of heat stacks including the volumes. 
From what I found until now, this should be possible. You just have to configure some parts of OpenStack. I enabled cinder-backup with ceph backend. Backups from volumes are working. I configured heat to include the option backups_enabled = True. When I use openstack stack snapshot create, I get a snapshot but no backups of my volumes. I don't get any error messages in heat. Debug logging didn't help either.

OpenStack version is Pike on Ubuntu installed with openstack-ansible. heat version is 9.0.3. So this should also include this bugfix: https://bugs.launchpad.net/heat/+bug/1687006

Is anybody using this feature? What am I missing?

Best regards
Christian

------------------------------

Message: 10
Date: Fri, 19 Oct 2018 12:42:00 +0300
From: Adrian Andreias
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hello,

We've just released Fleio version 1.1.

Fleio is a billing solution and control panel for OpenStack public clouds and traditional web hosters.

Fleio software automates the entire process for cloud users. New customers can use Fleio to sign up for an account, pay invoices, add credit to their account, as well as create and manage cloud resources such as virtual machines, storage and networking.

Full feature list:
https://fleio.com#features

You can see an online demo:
https://fleio.com/demo

And sign-up for a free trial:
https://fleio.com/signup

Cheers!

- Adrian Andreias
https://fleio.com

------------------------------

Message: 11
Date: Fri, 19 Oct 2018 20:54:29 +1100
From: Tony Breeds
To: OpenStack Development , OpenStack SIGs , OpenStack Operators
Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack
Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
> As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry). The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria
> * Train

I have re-worked my openstack/governance change [1] to ask the TC to accept adding Train to the poll as (partially) described in [2].

I present the names above to the community and Foundation marketing team for consideration. The list above does contain Train; clearly, if the TC does not approve [1], Train will not be included in the poll when created.

I apologise for any offence or slight caused by my previous email in this thread. It was well intentioned albeit, with hindsight, poorly thought through.

Yours Tony.

[1] https://review.openstack.org/#/c/611511/
[2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: ------------------------------ Message: 12 Date: Fri, 19 Oct 2018 16:33:17 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov> Content-Type: text/plain; charset="utf-8" Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {} Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 7:24 PM, "Chris Apsey" wrote: We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... > > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > 
+--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? 
>> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. >>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 13 Date: Fri, 19 Oct 2018 16:54:12 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? 
- Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset="utf-8" For reference, here is our full glance policy.json { "context_is_admin": "role:admin", "default": "role:admin", "add_image": "", "delete_image": "", "get_image": "", "get_images": "", "modify_image": "", "publicize_image": "role:admin", "communitize_image": "", "copy_from": "", "download_image": "", "upload_image": "", "delete_image_location": "", "get_image_location": "", "set_image_location": "", "add_member": "", "delete_member": "", "get_member": "", "get_members": "", "modify_member": "", "manage_image_cache": "role:admin", "get_task": "", "get_tasks": "", "add_task": "", "modify_task": "", "tasks_api_access": "role:admin", "deactivate": "", "reactivate": "", "get_metadef_namespace": "", "get_metadef_namespaces":"", "modify_metadef_namespace":"", "add_metadef_namespace":"", "get_metadef_object":"", "get_metadef_objects":"", "modify_metadef_object":"", "add_metadef_object":"", "list_metadef_resource_types":"", "get_metadef_resource_type":"", "add_metadef_resource_type_association":"", "get_metadef_property":"", "get_metadef_properties":"", "modify_metadef_property":"", "add_metadef_property":"", "get_metadef_tag":"", "get_metadef_tags":"", "modify_metadef_tag":"", "add_metadef_tag":"", "add_metadef_tags":"" } Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {} Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 7:24 PM, "Chris Apsey" wrote: We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
>
> ~iain
>
>
> $ openstack user create --password foo user1
> +---------------------+----------------------------------+
> | Field               | Value                            |
> +---------------------+----------------------------------+
> | domain_id           | default                          |
> | enabled             | True                             |
> | id                  | d18c0031ec56430499a2d690cb1f125c |
> | name                | user1                            |
> | options             | {}                               |
> | password_expires_at | None                             |
> +---------------------+----------------------------------+
> $ openstack user create --password foo user2
> +---------------------+----------------------------------+
> | Field               | Value                            |
> +---------------------+----------------------------------+
> | domain_id           | default                          |
> | enabled             | True                             |
> | id                  | be9f1061a5104abd834eabe98dff055d |
> | name                | user2                            |
> | options             | {}                               |
> | password_expires_at | None                             |
> +---------------------+----------------------------------+
> $ openstack project create project1
> +-------------+----------------------------------+
> | Field       | Value                            |
> +-------------+----------------------------------+
> | description |                                  |
> | domain_id   | default                          |
> | enabled     | True                             |
> | id          | 826876d6d3724018bae6253c7f540cb3 |
> | is_domain   | False                            |
> | name        | project1                         |
> | parent_id   | default                          |
> | tags        | []                               |
> +-------------+----------------------------------+
> $ openstack project create project2
> +-------------+----------------------------------+
> | Field       | Value                            |
> +-------------+----------------------------------+
> | description |                                  |
> | domain_id   | default                          |
> | enabled     | True                             |
> | id          | b446b93ac6e24d538c1943acbdd13cb2 |
> | is_domain   | False                            |
> | name        | project2                         |
> | parent_id   | default                          |
> | tags        | []                               |
> +-------------+----------------------------------+
> $ openstack role add --user user1 --project project1 _member_
> $ openstack role add --user user2 --project project2 _member_
> $ export OS_PASSWORD=foo
> $ export OS_USERNAME=user1
> $ export OS_PROJECT_NAME=project1
> $ openstack image list
> +--------------------------------------+--------+--------+
> | ID                                   | Name   | Status |
> +--------------------------------------+--------+--------+
> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> +--------------------------------------+--------+--------+
> $ openstack image create --private image1
> +------------------+------------------------------------------------------------------------------+
> | Field            | Value                                                                        |
> +------------------+------------------------------------------------------------------------------+
> | checksum         | None                                                                         |
> | container_format | bare                                                                         |
> | created_at       | 2018-10-18T22:17:41Z                                                         |
> | disk_format      | raw                                                                          |
> | file             | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file                         |
> | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4                                         |
> | min_disk         | 0                                                                            |
> | min_ram          | 0                                                                            |
> | name             | image1                                                                       |
> | owner            | 826876d6d3724018bae6253c7f540cb3                                             |
> | properties       | locations='[]', os_hash_algo='None', os_hash_value='None', os_hidden='False' |
> | protected        | False                                                                        |
> | schema           | /v2/schemas/image                                                            |
> | size             | None                                                                         |
> | status           | queued                                                                       |
> | tags             |                                                                              |
> | updated_at       | 2018-10-18T22:17:41Z                                                         |
> | virtual_size     | None                                                                         |
> | visibility       | private                                                                      |
> +------------------+------------------------------------------------------------------------------+
> $ openstack image list
> +--------------------------------------+--------+--------+
> | ID                                   | Name   | Status |
> +--------------------------------------+--------+--------+
> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
> +--------------------------------------+--------+--------+
> $ export OS_USERNAME=user2
> $ export OS_PROJECT_NAME=project2
> $ openstack image list
> +--------------------------------------+--------+--------+
> | ID                                   | Name   | Status |
> +--------------------------------------+--------+--------+
> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> +--------------------------------------+--------+--------+
> $ export OS_USERNAME=admin
> $ export OS_PROJECT_NAME=admin
> $ export OS_PASSWORD=xxx
> $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4
> $ export OS_USERNAME=user2
> $ export OS_PROJECT_NAME=project2
> $ export OS_PASSWORD=foo
> $ openstack image list
> +--------------------------------------+--------+--------+
> | ID                                   | Name   | Status |
> +--------------------------------------+--------+--------+
> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
> +--------------------------------------+--------+--------+
> $
>
>
> On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> INTEGRA, INC.] wrote:
>> openstack user create --domain default --password xxxxxxxx --project-domain
>> ndc --project test mike
>>
>>
>> openstack role add --user mike --user-domain default --project test user
>>
>> my admin account is in the NDC domain with a different username.
>>
>>
>>
>> /etc/glance/policy.json
>> {
>>
>> "context_is_admin": "role:admin",
>> "default": "role:admin",
>>
>>
>>
>>
>> I'm not terribly familiar with the policies but I feel like that default
>> line is making everyone an admin by default?
>>
>>
>> Mike Moore, M.S.S.E.
>>
>> Systems Engineer, Goddard Private Cloud
>> Michael.D.Moore at nasa.gov
>>
>> Hydrogen fusion brightens my day.
>>
>>
>> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote:
>>
>>
>> I suspect that your non-admin user is not really non-admin. How did you
>> create it?
>>
>> What you have for "context_is_admin" in glance's policy.json ?
>>
>> ~iain
>>
>>
>> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
>> INTEGRA, INC.] wrote:
>>> I have replicated this unexpected behavior in a Pike test environment, in
>>> addition to our Queens environment.
>>>
>>>
>>>
>>> Mike Moore, M.S.S.E.
>>>
>>> Systems Engineer, Goddard Private Cloud
>>> Michael.D.Moore at nasa.gov
>>>
>>> Hydrogen fusion brightens my day.
>>>
>>>
>>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA,
>>> INC.]" wrote:
>>>
>>> Yes. I verified it by creating a non-admin user in a different tenant. I
>>> created a new image, set to private with the project defined as our admin
>>> tenant.
>>>
>>> In the database I can see that the image is 'private' and the owner is the
>>> ID of the admin tenant.
>>>
>>> Mike Moore, M.S.S.E.
>>>
>>> Systems Engineer, Goddard Private Cloud
>>> Michael.D.Moore at nasa.gov
>>>
>>> Hydrogen fusion brightens my day.
>>>
>>>
>>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote:
>>>
>>>
>>>
>>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
>>> INTEGRA, INC.] wrote:
>>> > I’m seeing unexpected behavior in our Queens environment related to
>>> > Glance image visibility. Specifically users who, based on my
>>> > understanding of the visibility and ownership fields, should NOT be able
>>> > to see or view the image.
>>> >
>>> > If I create a new image with openstack image create and specify --project
>>> > and --private a non-admin user in a different tenant can see and
>>> > boot that image.
>>> >
>>> > That seems to be the opposite of what should happen. Any ideas?
>>>
>>> Yep, something's not right there.
>>>
>>> Are you sure that the user that can see the image doesn't have the admin
>>> role (for the project in its keystone token) ?
>>>
>>> Did you verify that the image's owner is what you intended, and that the
>>> visibility really is "private" ?
>>>
>>> ~iain
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

------------------------------

Message: 14
Date: Fri, 19 Oct 2018 13:45:03 -0400
From: Jay Pipes
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset=utf-8; format=flowed

Please do not use these mailing lists to advertise
closed-source/proprietary software solutions.

Thank you,
-jay

On 10/19/2018 05:42 AM, Adrian Andreias wrote:
> Hello,
>
> We've just released Fleio version 1.1.
>
> Fleio is a billing solution and control panel for OpenStack public
> clouds and traditional web hosters.
>
> Fleio software automates the entire process for cloud users. New
> customers can use Fleio to sign up for an account, pay invoices, add
> credit to their account, as well as create and manage cloud resources
> such as virtual machines, storage and networking.
>
> Full feature list:
> https://fleio.com#features
>
> You can see an online demo:
> https://fleio.com/demo
>
> And sign-up for a free trial:
> https://fleio.com/signup
>
>
>
> Cheers!
>
> - Adrian Andreias
> https://fleio.com
>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

------------------------------

Message: 15
Date: Fri, 19 Oct 2018 20:13:40 +0200
From: Mohammed Naser
To: jaypipes at gmail.com
Cc: openstack-operators
Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote:
>
> Please do not use these mailing lists to advertise
> closed-source/proprietary software solutions.
+1

> Thank you,
> -jay
>
> On 10/19/2018 05:42 AM, Adrian Andreias wrote:
> > Hello,
> >
> > We've just released Fleio version 1.1.
> >
> > Fleio is a billing solution and control panel for OpenStack public
> > clouds and traditional web hosters.
> >
> > Fleio software automates the entire process for cloud users. New
> > customers can use Fleio to sign up for an account, pay invoices, add
> > credit to their account, as well as create and manage cloud resources
> > such as virtual machines, storage and networking.
> >
> > Full feature list:
> > https://fleio.com#features
> >
> > You can see an online demo:
> > https://fleio.com/demo
> >
> > And sign-up for a free trial:
> > https://fleio.com/signup
> >
> >
> >
> > Cheers!
> >
> > - Adrian Andreias
> > https://fleio.com
> >
> >
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

------------------------------

Message: 16
Date: Fri, 19 Oct 2018 14:39:29 -0400
From: Erik McCormick
To: openstack-operators
Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface
Message-ID:
Content-Type: text/plain; charset="UTF-8"

I've been wrestling with getting Octavia up and running and have
become stuck on two issues. I'm hoping someone has run into these
before. My google foo has come up empty.

Issue 1:
When the Octavia controller tries to poll the amphora instance, it
tries repeatedly and eventually fails. The error on the controller
side is:

2018-10-19 14:17:39.181 26 ERROR
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
retries (currently set to 300) exhausted. The amphora is unavailable.
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
(Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),))

On the amphora side I see:
[2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
[2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
failure (_ssl.c:1754)

I've generated certificates both with the script in the Octavia git
repo, and with the OpenStack-Ansible playbook. I can see that they are
present in /etc/octavia/certs.
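For anyone who wants to poke at the handshake directly, it can be
exercised from the controller with plain openssl. This is just a sketch
(client.pem and server_ca.pem here stand in for whatever your
[haproxy_amphora] client_cert and server_ca settings point at, so
substitute your own paths):

$ openssl s_client -connect 10.7.0.112:9443 \
      -cert /etc/octavia/certs/client.pem \
      -key /etc/octavia/certs/client.pem \
      -CAfile /etc/octavia/certs/server_ca.pem

If that reports the same certificate verify failure, it at least
narrows the problem down to the generated certificates themselves
rather than the Octavia driver.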
I'm using the Kolla (Queens) containers for the control plane so I'm
sure I've satisfied all the Python library constraints.

Issue 2:
I'm not sure how it gets configured, but the tenant network interface
(ens6) never comes up. I can spawn other instances on that network
with no issue, and I can see that Neutron has the port attached to the
instance. However, in the instance this is all I get:

ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
    inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe30:c460/64 scope link
       valid_lft forever preferred_lft forever
3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else including udev rules.

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik

------------------------------

Message: 17
Date: Sat, 20 Oct 2018 01:47:42 +0200
From: Gaël THEROND
To: Erik McCormick
Cc: openstack-operators
Subject: Re: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Erik!

Glad I'm not the only one having this issue with the SSL communication
between the amphorae and the control plane.

Even if I don't yet have a clear answer regarding that issue, I think
your second issue is not actually an issue: the interface is plugged
into a network namespace, so a plain ip a won't list it. Use ip netns ls
to find the namespace (on the amphora it is typically called
amphora-haproxy), then ip netns exec <namespace> ip a to see the NICs
inside it.

Hope it will help.

On Fri, 19 Oct 2018 at 20:40, Erik McCormick wrote:

> I've been wrestling with getting Octavia up and running and have
> become stuck on two issues. I'm hoping someone has run into these
> before. My google foo has come up empty.
>
> Issue 1:
> When the Octavia controller tries to poll the amphora instance, it
> tries repeatedly and eventually fails. The error on the controller
> side is:
>
> 2018-10-19 14:17:39.181 26 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> retries (currently set to 300) exhausted. The amphora is unavailable.
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),))
>
> On the amphora side I see:
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> failure (_ssl.c:1754)
>
> I've generated certificates both with the script in the Octavia git
> repo, and with the OpenStack-Ansible playbook. I can see that they are
> present in /etc/octavia/certs.
>
> I'm using the Kolla (Queens) containers for the control plane so I'm
> sure I've satisfied all the Python library constraints.
>
> Issue 2:
> I'm not sure how it gets configured, but the tenant network interface
> (ens6) never comes up. I can spawn other instances on that network
> with no issue, and I can see that Neutron has the port attached to the
> instance. However, in the instance this is all I get:
>
> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: ens3: mtu 9000 qdisc pfifo_fast
> state UP group default qlen 1000
>     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
>     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f816:3eff:fe30:c460/64 scope link
>        valid_lft forever preferred_lft forever
> 3: ens6: mtu 1500 qdisc noop state DOWN group
> default qlen 1000
>     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
>
> There's no evidence of the interface anywhere else including udev rules.
>
> Any help with either or both issues would be greatly appreciated.
>
> Cheers,
> Erik
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Subject: Digest Footer

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


------------------------------

End of OpenStack-operators Digest, Vol 96, Issue 7
**************************************************

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.d.moore at nasa.gov Sat Oct 20 23:15:16 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Sat, 20 Oct 2018 23:15:16 +0000
Subject: [Openstack-operators] OpenStack-operators Digest, Vol 96, Issue 7
In-Reply-To:
References:
Message-ID:

The images exist and are bootable. I'm going to trace through the actual
code for glance API. Any suggestions on where the show/hide logic is when
it filters responses? I'm new to digging through OpenStack code.

________________________________
From: Logan Hicks [logan.hicks at live.com]
Sent: Friday, October 19, 2018 8:00 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 96, Issue 7

Re: Glance Image Visibility Issue? - Non admin users can see private
images from other tenants (Chris Apsey)

I noticed that the image says queued. If I'm not mistaken, an image can't
have permissions applied until after the image is created, which might
explain the issue he's seeing. The object doesn't exist until it's made by
OpenStack.

I'd check to see if something is holding up images being made. I'd start
with glance.

Respectfully,
Logan Hicks

-------- Original message --------
From: openstack-operators-request at lists.openstack.org
Date: 10/19/18 7:49 PM (GMT-05:00)
To: openstack-operators at lists.openstack.org
Subject: OpenStack-operators Digest, Vol 96, Issue 7

Send OpenStack-operators mailing list submissions to
        openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
or, via email, send a message with subject or body 'help' to
        openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
        openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-operators digest..."

Today's Topics:

   1. [nova] Removing the CachingScheduler (Matt Riedemann)
   2. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   3. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey)
   4. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell)
   5. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   6. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell)
   7. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey)
   8. osops-tools-monitoring Dependency problems (Tomáš Vondra)
   9. [heat][cinder] How to create stack snapshot including volumes (Christian Zunker)
   10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias)
   11. Re: [Openstack-sigs] [all] Naming the T release of OpenStack (Tony Breeds)
   12. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   13. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes)
   15. Re: Fleio - OpenStack billing - ver.
1.1 released (Mohammed Naser) 16. [Octavia] SSL errors polling amphorae and missing tenant network interface (Erik McCormick) 17. Re: [Octavia] SSL errors polling amphorae and missing tenant network interface (Gaël THEROND) ---------------------------------------------------------------------- Message: 1 Date: Thu, 18 Oct 2018 17:07:00 -0500 From: Matt Riedemann To: "openstack-operators at lists.openstack.org" Subject: [Openstack-operators] [nova] Removing the CachingScheduler Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work for migrating his deployment from the CachingScheduler to the FilterScheduler + Placement. If you are using the CachingScheduler and have a problem with its removal, now is the time to speak up or forever hold your peace. [1] https://review.openstack.org/#/c/611723/1 -- Thanks, Matt ------------------------------ Message: 2 Date: Thu, 18 Oct 2018 22:11:40 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset="utf-8" I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I’m seeing unexpected behavior in our Queens environment related to > Glance image visibility. Specifically users who, based on my > understanding of the visibility and ownership fields, should NOT be able > to see or view the image. > > If I create a new image with openstack image create and specify –project > and –private a non-admin user in a different tenant can see and > boot that image. > > That seems to be the opposite of what should happen. Any ideas? Yep, something's not right there. Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token) ? Did you verify that the image's owner is what you intended, and that the visibility really is "private" ? 
~iain _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 3 Date: Thu, 18 Oct 2018 18:23:35 -0400 From: Chris Apsey To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , iain MacDonnell , Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> Content-Type: text/plain; format=flowed; charset="UTF-8" Do you have a liberal/custom policy.json that perhaps is causing unexpected behavior? Can't seem to reproduce this. On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > I have replicated this unexpected behavior in a Pike test environment, in > addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, > INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I > created a new image, set to private with the project defined as our admin > tenant. > > In the database I can see that the image is 'private' and the owner is the > ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> I’m seeing unexpected behavior in our Queens environment related to >> Glance image visibility. Specifically users who, based on my >> understanding of the visibility and ownership fields, should NOT be able >> to see or view the image. >> >> If I create a new image with openstack image create and specify –project >> and –private a non-admin user in a different tenant can see and >> boot that image. >> >> That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? 
> > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 4 Date: Thu, 18 Oct 2018 15:25:22 -0700 From: iain MacDonnell To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com> Content-Type: text/plain; charset=utf-8; format=flowed I suspect that your non-admin user is not really non-admin. How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? 
> > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > ------------------------------ Message: 5 Date: Thu, 18 Oct 2018 22:32:42 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov> Content-Type: text/plain; charset="utf-8" openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike openstack role add --user mike --user-domain default --project test user my admin account is in the NDC domain with a different username. /etc/glance/policy.json { "context_is_admin": "role:admin", "default": "role:admin", I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: I suspect that your non-admin user is not really non-admin. How did you create it? What you have for "context_is_admin" in glance's policy.json ? ~iain On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and specify –project > > and –private a non-admin user in a different tenant can see and > > boot that image. 
> > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. > > Are you sure that the user that can see the image doesn't have the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > ------------------------------ Message: 6 Date: Thu, 18 Oct 2018 15:48:27 -0700 From: iain MacDonnell To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed That all looks fine. I believe that the "default" policy applies in place of any that's not explicitly specified - i.e. "if there's no matching policy below, you need to have the admin role to be able to do it". I do have that line in my policy.json, and I cannot reproduce your problem (see below). I'm not using domains (other than "default"). I wonder if that's a factor... 
~iain $ openstack user create --password foo user1 +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | d18c0031ec56430499a2d690cb1f125c | | name | user1 | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ $ openstack user create --password foo user2 +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | be9f1061a5104abd834eabe98dff055d | | name | user2 | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ $ openstack project create project1 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | 826876d6d3724018bae6253c7f540cb3 | | is_domain | False | | name | project1 | | parent_id | default | | tags | [] | +-------------+----------------------------------+ $ openstack project create project2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | b446b93ac6e24d538c1943acbdd13cb2 | | is_domain | False | | name | project2 | | parent_id | default | | tags | [] | +-------------+----------------------------------+ $ openstack role add --user user1 --project project1 _member_ $ openstack role add --user user2 --project project2 _member_ $ export OS_PASSWORD=foo $ export OS_USERNAME=user1 $ export OS_PROJECT_NAME=project1 $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | +--------------------------------------+--------+--------+ $ openstack image create --private image1 +------------------+------------------------------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------------------------------+ | checksum | None | | container_format | bare | | created_at | 2018-10-18T22:17:41Z | | disk_format | raw | | file | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file | | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | | min_disk | 0 | | min_ram | 0 | | name | image1 | | owner | 826876d6d3724018bae6253c7f540cb3 | | properties | locations='[]', os_hash_algo='None', os_hash_value='None', os_hidden='False' | | protected | False | | schema | /v2/schemas/image | | size | None | | status | queued | | tags | | | updated_at | 2018-10-18T22:17:41Z | | virtual_size | None | | visibility | private | +------------------+------------------------------------------------------------------------------+ $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | +--------------------------------------+--------+--------+ $ export OS_USERNAME=user2 $ export OS_PROJECT_NAME=project2 $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | 
+--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | +--------------------------------------+--------+--------+ $ export OS_USERNAME=admin $ export OS_PROJECT_NAME=admin $ export OS_PASSWORD=xxx $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 $ export OS_USERNAME=user2 $ export OS_PROJECT_NAME=project2 $ export OS_PASSWORD=foo $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | +--------------------------------------+--------+--------+ $ On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike > > > openstack role add --user mike --user-domain default --project test user > > my admin account is in the NDC domain with a different username. > > > > /etc/glance/policy.json > { > > "context_is_admin": "role:admin", > "default": "role:admin", > > > > > I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: > > > I suspect that your non-admin user is not really non-admin. How did you > create it? > > What you have for "context_is_admin" in glance's policy.json ? > > ~iain > > > On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. > > > > > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > > > > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. > > > > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: > > > > > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > > INTEGRA, INC.] wrote: > > > I’m seeing unexpected behavior in our Queens environment related to > > > Glance image visibility. Specifically users who, based on my > > > understanding of the visibility and ownership fields, should NOT be able > > > to see or view the image. > > > > > > If I create a new image with openstack image create and specify –project > > > and –private a non-admin user in a different tenant can see and > > > boot that image. > > > > > > That seems to be the opposite of what should happen. Any ideas? > > > > Yep, something's not right there. > > > > Are you sure that the user that can see the image doesn't have the admin > > role (for the project in its keystone token) ? > > > > Did you verify that the image's owner is what you intended, and that the > > visibility really is "private" ? 
> > > > ~iain > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > ------------------------------ Message: 7 Date: Thu, 18 Oct 2018 19:23:42 -0400 From: Chris Apsey To: iain MacDonnell , "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> Content-Type: text/plain; format=flowed; charset="UTF-8" We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. 
>>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 8 Date: Fri, 19 Oct 2018 10:58:30 +0200 From: Tomáš Vondra To: Subject: [Openstack-operators] osops-tools-monitoring Dependency problems Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz> Content-Type: text/plain; charset="iso-8859-2" Hi! I'm a long time user of monitoring-for-openstack, also known as oschecks. Concretely, I used a version from 2015 with OpenStack python client libraries from Kilo. Now I have upgraded them to Mitaka and it got broken. Even the latest oschecks don't work. I didn't quite expect that, given that there are several commits from this year e.g. by Nagasai Vinaykumar Kapalavai and paramite. Can one of them or some other user step up and say what version of OpenStack clients is oschecks working with? Ideally, write it down in requirements.txt so that it will be reproducible? Also, some documentation of what is the minimal set of parameters would also come in handy. Thanks a lot, Tomas from Homeatcloud The error messages are as absurd as: oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0' --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring CRITICAL: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in safe_run method() File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in _check_glance_api glance = utils.Glance() File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in __init__ self.glance.parser = self.glance.get_base_parser(sys.argv) TypeError: get_base_parser() takes exactly 1 argument (2 given) (I can see 4 parameters on the command line.) ------------------------------ Message: 9 Date: Fri, 19 Oct 2018 11:21:25 +0200 From: Christian Zunker To: openstack-operators Subject: [Openstack-operators] [heat][cinder] How to create stack snapshot including volumes Message-ID: Content-Type: text/plain; charset="utf-8" Hi List, I'd like to take snapshots of heat stacks including the volumes. 
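(For concreteness, the commands I'm driving this with look roughly like
the following; the stack and snapshot names are just examples:

$ openstack stack snapshot create --name snap1 my-stack
$ openstack stack snapshot list my-stack
$ openstack volume backup list

My expectation is that the last command should end up showing one backup
per volume in the stack.)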
From what I found until now, this should be possible. You just have to configure some parts of OpenStack.

I enabled cinder-backup with a ceph backend. Backups from volumes are working. I configured heat to include the option backups_enabled = True. When I use openstack stack snapshot create, I get a snapshot but no backups of my volumes. I don't get any error messages in heat. Debug logging didn't help either.

OpenStack version is Pike on Ubuntu, installed with openstack-ansible. The heat version is 9.0.3, so this should also include this bugfix:
https://bugs.launchpad.net/heat/+bug/1687006

Is anybody using this feature? What am I missing?

Best regards
Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 10
Date: Fri, 19 Oct 2018 12:42:00 +0300
From: Adrian Andreias 
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Hello,

We've just released Fleio version 1.1.

Fleio is a billing solution and control panel for OpenStack public clouds and traditional web hosters.

Fleio software automates the entire process for cloud users. New customers can use Fleio to sign up for an account, pay invoices, add credit to their account, as well as create and manage cloud resources such as virtual machines, storage and networking.

Full feature list: https://fleio.com#features

You can see an online demo: https://fleio.com/demo

And sign up for a free trial: https://fleio.com/signup

Cheers!

- Adrian Andreias
https://fleio.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 11
Date: Fri, 19 Oct 2018 20:54:29 +1100
From: Tony Breeds 
To: OpenStack Development , OpenStack SIGs , OpenStack Operators
Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack
Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
>     As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry). The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria
> * Train

I have re-worked my openstack/governance change [1] to ask the TC to accept adding Train to the poll, as (partially) described in [2]. I present the names above to the community and Foundation marketing team for consideration. The list above does contain Train; clearly, if the TC do not approve [1], Train will not be included in the poll when created.

I apologise for any offence or slight caused by my previous email in this thread. It was well intentioned albeit, with hindsight, poorly thought through.

Yours Tony.
[1] https://review.openstack.org/#/c/611511/
[2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: ------------------------------ Message: 12 Date: Fri, 19 Oct 2018 16:33:17 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov> Content-Type: text/plain; charset="utf-8" Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {} Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 7:24 PM, "Chris Apsey" wrote: We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... > > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > 
+--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? 
>> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. >>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 13 Date: Fri, 19 Oct 2018 16:54:12 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? 
- Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset="utf-8" For reference, here is our full glance policy.json { "context_is_admin": "role:admin", "default": "role:admin", "add_image": "", "delete_image": "", "get_image": "", "get_images": "", "modify_image": "", "publicize_image": "role:admin", "communitize_image": "", "copy_from": "", "download_image": "", "upload_image": "", "delete_image_location": "", "get_image_location": "", "set_image_location": "", "add_member": "", "delete_member": "", "get_member": "", "get_members": "", "modify_member": "", "manage_image_cache": "role:admin", "get_task": "", "get_tasks": "", "add_task": "", "modify_task": "", "tasks_api_access": "role:admin", "deactivate": "", "reactivate": "", "get_metadef_namespace": "", "get_metadef_namespaces":"", "modify_metadef_namespace":"", "add_metadef_namespace":"", "get_metadef_object":"", "get_metadef_objects":"", "modify_metadef_object":"", "add_metadef_object":"", "list_metadef_resource_types":"", "get_metadef_resource_type":"", "add_metadef_resource_type_association":"", "get_metadef_property":"", "get_metadef_properties":"", "modify_metadef_property":"", "add_metadef_property":"", "get_metadef_tag":"", "get_metadef_tags":"", "modify_metadef_tag":"", "add_metadef_tag":"", "add_metadef_tags":"" } Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {} Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 7:24 PM, "Chris Apsey" wrote: We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json? Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know its caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config) On October 18, 2018 18:50:12 iain MacDonnell wrote: > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
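To make the "default" rule behaviour described above concrete, here is a quick check against the glance policy.json posted at the top of this message (the grep pattern is just an example):

$ grep -E '"(context_is_admin|default|get_images)"' /etc/glance/policy.json
    "context_is_admin": "role:admin",
    "default": "role:admin",
    "get_images": "",

# "get_images" is present and empty, so it allows any authenticated user;
# the "default" rule is only consulted for rules missing from the file
# entirely. What lets a user bypass the owner check on private images is
# matching "context_is_admin".

To check whether the user really is non-admin, its effective role assignments can be listed (user and project names here match the earlier commands; --user-domain matters when the user lives outside the default domain):

$ openstack role assignment list --user mike --user-domain default \
      --project test --names --effective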
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain >> ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default >> line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I have replicated this unexpected behavior in a Pike test environment, in >>> addition to our Queens environment. >>> >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>> INC.]" wrote: >>> >>> Yes. I verified it by creating a non-admin user in a different tenant. I >>> created a new image, set to private with the project defined as our admin >>> tenant. >>> >>> In the database I can see that the image is 'private' and the owner is the >>> ID of the admin tenant. >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>> >>> >>> >>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>> > I’m seeing unexpected behavior in our Queens environment related to >>> > Glance image visibility. Specifically users who, based on my >>> > understanding of the visibility and ownership fields, should NOT be able >>> > to see or view the image. >>> > >>> > If I create a new image with openstack image create and specify –project >>> > and –private a non-admin user in a different tenant can see and >>> > boot that image. 
>>> > >>> > That seems to be the opposite of what should happen. Any ideas? >>> >>> Yep, something's not right there. >>> >>> Are you sure that the user that can see the image doesn't have the admin >>> role (for the project in its keystone token) ? >>> >>> Did you verify that the image's owner is what you intended, and that the >>> visibility really is "private" ? >>> >>> ~iain >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 14 Date: Fri, 19 Oct 2018 13:45:03 -0400 From: Jay Pipes To: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed Please do not use these mailing lists to advertise closed-source/proprietary software solutions. Thank you, -jay On 10/19/2018 05:42 AM, Adrian Andreias wrote: > Hello, > > We've just released Fleio version 1.1. > > Fleio is a billing solution and control panel for OpenStack public > clouds and traditional web hosters. > > Fleio software automates the entire process for cloud users. New > customers can use Fleio to sign up for an account, pay invoices, add > credit to their account, as well as create and manage cloud resources > such as virtual machines, storage and networking. > > Full feature list: > https://fleio.com#features > > You can see an online demo: > https://fleio.com/demo > > And sign-up for a free trial: > https://fleio.com/signup > > > > Cheers! > > - Adrian Andreias > https://fleio.com > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > ------------------------------ Message: 15 Date: Fri, 19 Oct 2018 20:13:40 +0200 From: Mohammed Naser To: jaypipes at gmail.com Cc: openstack-operators Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released Message-ID: Content-Type: text/plain; charset="UTF-8" On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote: > > Please do not use these mailing lists to advertise > closed-source/proprietary software solutions. 
+1 > Thank you, > -jay > > On 10/19/2018 05:42 AM, Adrian Andreias wrote: > > Hello, > > > > We've just released Fleio version 1.1. > > > > Fleio is a billing solution and control panel for OpenStack public > > clouds and traditional web hosters. > > > > Fleio software automates the entire process for cloud users. New > > customers can use Fleio to sign up for an account, pay invoices, add > > credit to their account, as well as create and manage cloud resources > > such as virtual machines, storage and networking. > > > > Full feature list: > > https://fleio.com#features > > > > You can see an online demo: > > https://fleio.com/demo > > > > And sign-up for a free trial: > > https://fleio.com/signup > > > > > > > > Cheers! > > > > - Adrian Andreias > > https://fleio.com > > > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com ------------------------------ Message: 16 Date: Fri, 19 Oct 2018 14:39:29 -0400 From: Erik McCormick To: openstack-operators Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface Message-ID: Content-Type: text/plain; charset="UTF-8" I've been wrestling with getting Octavia up and running and have become stuck on two issues. I'm hoping someone has run into these before. My google foo has come up empty. Issue 1: When the Octavia controller tries to poll the amphora instance, it tries repeatedly and eventually fails. The error on the controller side is: 2018-10-19 14:17:39.181 26 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 300) exhausted. The amphora is unavailable. Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) On the amphora side I see: [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1754) I've generated certificates both with the script in the Octavia git repo, and with the Openstack Ansible playbook. I can see that they are present in /etc/octavia/certs. 
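One way to take the Octavia code out of the picture is to drive the TLS handshake against the amphora agent port by hand. A rough sketch only: the cert paths below are illustrative, so point them at whatever your [haproxy_amphora] client_cert and server_ca options reference, and add a -key argument if the client .pem does not have its key concatenated:

$ openssl s_client -connect 10.7.0.112:9443 \
      -cert /etc/octavia/certs/client.pem \
      -CAfile /etc/octavia/certs/server_ca.pem </dev/null

# the server sends its certificate before validating the client's, so even
# a failing handshake lets you see what the amphora actually presents:
$ openssl s_client -connect 10.7.0.112:9443 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates

If the subject and issuer DNs that come back look suspicious (for example, a certificate carrying the same DN as the CA that signed it), that is worth chasing before anything else.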
I'm using the Kolla (Queens) containers for the control plane so I'm sure I've satisfied all the python library constraints.

Issue 2:
I'm not sure how it gets configured, but the tenant network interface (ens6) never comes up. I can spawn other instances on that network with no issue, and I can see that Neutron has the port attached to the instance. However, in the instance this is all I get:

ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
    inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe30:c460/64 scope link
       valid_lft forever preferred_lft forever
3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else, including udev rules.

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik

------------------------------

Message: 17
Date: Sat, 20 Oct 2018 01:47:42 +0200
From: Gaël THEROND 
To: Erik McCormick 
Cc: openstack-operators 
Subject: Re: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Hi Erik!

Glad I'm not the only one having this issue with the SSL communication between the amphora and the control plane.

Even if I don't yet have a clear answer regarding that issue, I think your second issue is not actually an issue: the interface is plugged into a network namespace, so you'll need to list the NICs inside that namespace as well. Use ip netns ls to find the namespace.

Hope it will help.

On Fri, Oct 19, 2018 at 20:40, Erik McCormick wrote:
> I've been wrestling with getting Octavia up and running and have
> become stuck on two issues. I'm hoping someone has run into these
> before. My google foo has come up empty.
>
> Issue 1:
> When the Octavia controller tries to poll the amphora instance, it
> tries repeatedly and eventually fails. The error on the controller
> side is:
>
> 2018-10-19 14:17:39.181 26 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> retries (currently set to 300) exhausted. The amphora is unavailable.
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)) > > On the amphora side I see: > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > failure (_ssl.c:1754) > > I've generated certificates both with the script in the Octavia git > repo, and with the Openstack Ansible playbook. I can see that they are > present in /etc/octavia/certs. > > I'm using the Kolla (Queens) containers for the control plane so I'm > sure I've satisfied all the python library constraints. > > Issue 2: > I"m not sure how it gets configured, but the tenant network interface > (ens6) never comes up. I can spawn other instances on that network > with no issue, and I can see that Neutron has the port attached to the > instance. However, in the instance this is all I get: > > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: ens3: mtu 9000 qdisc pfifo_fast > state UP group default qlen 1000 > link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe30:c460/64 scope link > valid_lft forever preferred_lft forever > 3: ens6: mtu 1500 qdisc noop state DOWN group > default qlen 1000 > link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > There's no evidence of the interface anywhere else including udev rules. > > Any help with either or both issues would be greatly appreciated. > > Cheers, > Erik > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ End of OpenStack-operators Digest, Vol 96, Issue 7 ************************************************** -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tobias.urdin at binero.se Mon Oct 22 08:25:36 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 22 Oct 2018 10:25:36 +0200 Subject: [Openstack-operators] [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: +operators My bad. On 10/22/2018 10:22 AM, Tobias Urdin wrote: > Hello, > > I've been having a lot of issues with SSL certificates myself, on my > second trip now trying to get it working. > > Before I spent a lot of time walking through every line in the DevStack > plugin and fixing my config options, used the generate > script [1] and still it didn't work. > > When I got the "invalid padding" issue it was because of the DN I used > for the CA and the certificate IIRC. > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > 19:47 < tobias-urdin> after a quick google "The problem was that my > CA DN was the same as the certificate DN." > > IIRC I think that solved it, but then again I wouldn't remember fully > since I've been at so many different angles by now. > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > it can help you out > http://paste.openstack.org/show/732575/ > > ----- > > Sorry for hijacking the thread but I'm stuck as well. > > I've in the past tried to generate the certificates with [1] but now > moved on to using the openstack-ansible way of generating them [2] > with some modifications. > > Right now I'm just getting: Could not connect to instance. Retrying.: > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > from the amphoras, haven't got any further but I've eliminated a lot of > stuck in the middle. > > Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > behind. > Checking the amphora with openssl s_client [3] it gives the same one, > but the verification is successful just that I don't understand what the > bad signature > part is about, from browsing some OpenSSL code it seems to be related to > RSA signatures somehow. > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > signature:s3_clnt.c:2032: > > So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > back to something related > to the certificates or the communication between the endpoints, or what > actually responds inside the amphora (gunicorn IIUC?). Based on the > "verify" functions actually causing that bad signature error I would > assume it's the generated certificate that the amphora presents that is > causing it. > > I'll have to continue the troubleshooting to the inside of the amphora, > I've used the test-only amphora image before but have now built my own > one that is > using the amphora-agent from the actual stable branch, but same issue > (bad signature). 
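Two quick differentials for a "bad signature" specifically; a sketch with illustrative file names, so substitute the client CA and client certificate your deployment generated:

$ openssl verify -CAfile ca_01.pem client.pem
$ openssl x509 -in client.pem -noout -text | grep 'Signature Algorithm'

The first confirms the client certificate actually chains to the CA the amphora is told to trust; the second shows which digest the certificate was signed with, which is the thing signing_digest has to agree with.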
> > For verbosity this is the config options set for the certificates in > octavia.conf and which file it was copied from [4], same here, a > replication of what openstack-ansible does. > > Appreciate any feedback or help :) > > Best regards > Tobias > > [1] > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > [2] http://paste.openstack.org/show/732483/ > [3] http://paste.openstack.org/show/732486/ > [4] http://paste.openstack.org/show/732487/ > > On 10/20/2018 01:53 AM, Michael Johnson wrote: >> Hi Erik, >> >> Sorry to hear you are still having certificate issues. >> >> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant >> network for the VIP, one of the first steps after the worker connects >> to the amphora agent is finishing the required configuration of the >> VIP interface inside the network namespace on the amphroa. >> >> If I remember correctly, you are attempting to configure Octavia with >> the dual CA option (which is good for non-development use). >> >> This is what I have for notes: >> >> [certificates] gets the following: >> cert_generator = local_cert_generator >> ca_certificate = server CA's "server.pem" file >> ca_private_key = server CA's "server.key" file >> ca_private_key_passphrase = pass phrase for ca_private_key >> [controller_worker] >> client_ca = Client CA's ca_cert file >> [haproxy_amphora] >> client_cert = Client CA's client.pem file (I think with it's key >> concatenated is what rm_work said the other day) >> server_ca = Server CA's ca_cert file >> >> That said, I can probably run through this and write something up next >> week that is more step-by-step/detailed. >> >> Michael >> >> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick >> wrote: >>> Apologies for cross-posting, but in the event that these might be >>> worth filing as bugs, I wanted the Octavia devs to see it as well... >>> >>> I've been wrestling with getting Octavia up and running and have >>> become stuck on two issues. I'm hoping someone has run into these >>> before. My google foo has come up empty. >>> >>> Issue 1: >>> When the Octavia controller tries to poll the amphora instance, it >>> tries repeatedly and eventually fails. The error on the controller >>> side is: >>> >>> 2018-10-19 14:17:39.181 26 ERROR >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >>> retries (currently set to 300) exhausted. The amphora is unavailable. >>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >>> SSLError(SSLError("bad handshake: Error([('rsa routines', >>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>> 'tls_process_server_certificate', 'certificate verify >>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>> 'tls_process_server_certificate', 'certificate verify >>> failed')],)",),)) >>> >>> On the amphora side I see: >>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. 
>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >>> failure (_ssl.c:1754) >>> >>> I've generated certificates both with the script in the Octavia git >>> repo, and with the Openstack Ansible playbook. I can see that they are >>> present in /etc/octavia/certs. >>> >>> I'm using the Kolla (Queens) containers for the control plane so I'm >>> sure I've satisfied all the python library constraints. >>> >>> Issue 2: >>> I"m not sure how it gets configured, but the tenant network interface >>> (ens6) never comes up. I can spawn other instances on that network >>> with no issue, and I can see that Neutron has the port attached to the >>> instance. However, in the instance this is all I get: >>> >>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >>> group default qlen 1 >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >>> inet 127.0.0.1/8 scope host lo >>> valid_lft forever preferred_lft forever >>> inet6 ::1/128 scope host >>> valid_lft forever preferred_lft forever >>> 2: ens3: mtu 9000 qdisc pfifo_fast >>> state UP group default qlen 1000 >>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >>> valid_lft forever preferred_lft forever >>> inet6 fe80::f816:3eff:fe30:c460/64 scope link >>> valid_lft forever preferred_lft forever >>> 3: ens6: mtu 1500 qdisc noop state DOWN group >>> default qlen 1000 >>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >>> >>> There's no evidence of the interface anywhere else including udev rules. >>> >>> Any help with either or both issues would be greatly appreciated. >>> >>> Cheers, >>> Erik >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > From tobias.urdin at binero.se Mon Oct 22 08:28:25 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 22 Oct 2018 10:28:25 +0200 Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: Message-ID: Hello, Seems we are quite a few having difficulties getting it to work. I missed adding operators ML to my previous reply, sent it again. I'm at the point where SSL pretty much becomes a hassle for operations, if there was an option to just go with a shared secret I would've done a while ago, which probably says a lot about the amount of time on this. Best regards Tobias On 10/20/2018 01:58 AM, Gaël THEROND wrote: > Hi eric! > > Glad I’m not the only one having this issue with the ssl communication > between the amphora and the CP. > > Even if I don’t yet get a clear answer regarding that issue, I think > your second issue is not an issue as the interface is mounted on a > namespace and so you’ll need to list all nic even those from namespace. > > Use an ip netns ls to get the namespace. > > Hope it will help. > > Le ven. 19 oct. 
2018 à 20:40, Erik McCormick > > a écrit : > > I've been wrestling with getting Octavia up and running and have > become stuck on two issues. I'm hoping someone has run into these > before. My google foo has come up empty. > > Issue 1: > When the Octavia controller tries to poll the amphora instance, it > tries repeatedly and eventually fails. The error on the controller > side is: > > 2018-10-19 14:17:39.181 26 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > retries (currently set to 300) exhausted.  The amphora is unavailable. > Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > exceeded with url: /0.5/plug/vip/10.250.20.15 > (Caused by > SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > port=9443): Max retries exceeded with url: > /0.5/plug/vip/10.250.20.15 > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)) > > On the amphora side I see: > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL > request. > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > ip=::ffff:10.7.0.40 : [SSL: > SSL_HANDSHAKE_FAILURE] ssl handshake > failure (_ssl.c:1754) > > I've generated certificates both with the script in the Octavia git > repo, and with the Openstack Ansible playbook. I can see that they are > present in /etc/octavia/certs. > > I'm using the Kolla (Queens) containers for the control plane so I'm > sure I've satisfied all the python library constraints. > > Issue 2: > I"m not sure how it gets configured, but the tenant network interface > (ens6) never comes up. I can spawn other instances on that network > with no issue, and I can see that Neutron has the port attached to the > instance. However, in the instance this is all I get: > > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1 >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >     inet 127.0.0.1/8 scope host lo >        valid_lft forever preferred_lft forever >     inet6 ::1/128 scope host >        valid_lft forever preferred_lft forever > 2: ens3: mtu 9000 qdisc pfifo_fast > state UP group default qlen 1000 >     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >     inet 10.7.0.112/16 brd 10.7.255.255 > scope global ens3 >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe30:c460/64 scope link >        valid_lft forever preferred_lft forever > 3: ens6: mtu 1500 qdisc noop state DOWN group > default qlen 1000 >     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > There's no evidence of the interface anywhere else including udev > rules. > > Any help with either or both issues would be greatly appreciated. 
> > Cheers, > Erik > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Oct 22 14:50:06 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 22 Oct 2018 10:50:06 -0400 Subject: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: Oops, dropped Operators. Can't wait until it's all one list... On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick wrote: > > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: > > > > Hello, > > > > I've been having a lot of issues with SSL certificates myself, on my > > second trip now trying to get it working. > > > > Before I spent a lot of time walking through every line in the DevStack > > plugin and fixing my config options, used the generate > > script [1] and still it didn't work. > > > > When I got the "invalid padding" issue it was because of the DN I used > > for the CA and the certificate IIRC. > > > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > > 19:47 < tobias-urdin> after a quick google "The problem was that my > > CA DN was the same as the certificate DN." > > > > IIRC I think that solved it, but then again I wouldn't remember fully > > since I've been at so many different angles by now. > > > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > > it can help you out > > http://paste.openstack.org/show/732575/ > > > > Tobias, I owe you a beer. This was precisely the issue. I'm deploying > Octavia with kolla-ansible. It only deploys a single CA. After hacking > the templates and playbook to incorporate a separate server CA, the > amphorae now load and provision the required namespace. I'm adding a > kolla tag to the subject of this in hopes that someone might want to > take on changing this behavior in the project. Hopefully after I get > through Upstream Institute in Berlin I'll be able to do it myself if > nobody else wants to do it. > > For certificate generation, I extracted the contents of > octavia_certs_install.yml (which sets up the directory structure, > openssl.cnf, and the client CA), and octavia_certs.yml (which creates > the server CA and the client certificate) and mashed them into a > separate playbook just for this purpose. At the end I get: > > ca_01.pem - Client CA Certificate > ca_01.key - Client CA Key > ca_server_01.pem - Server CA Certificate > cakey.pem - Server CA Key > client.pem - Concatenated Client Key and Certificate > > If it would help to have the playbook, I can stick it up on github > with a huge "This is a hack" disclaimer on it. > > > ----- > > > > Sorry for hijacking the thread but I'm stuck as well. > > > > I've in the past tried to generate the certificates with [1] but now > > moved on to using the openstack-ansible way of generating them [2] > > with some modifications. 
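As a concrete illustration of the dual-CA layout Erik describes above, here is a rough openssl sketch; it is not what kolla-ansible or openstack-ansible actually run. The DNs are invented, and deliberately differ between each CA and the certificate it signs, given the same-DN trap mentioned earlier in the thread. A real deployment should also encrypt the CA keys instead of using -nodes:

# Server CA: signs the certificates the amphorae present
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout cakey.pem -out ca_server_01.pem -subj "/CN=octavia-server-ca"

# Client CA: signs the certificate the controllers present to the amphorae
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout ca_01.key -out ca_01.pem -subj "/CN=octavia-client-ca"

# Client certificate issued by the client CA, with its key concatenated
$ openssl req -newkey rsa:2048 -nodes -keyout client.key \
      -out client.csr -subj "/CN=octavia-controller"
$ openssl x509 -req -in client.csr -CA ca_01.pem -CAkey ca_01.key \
      -CAcreateserial -days 365 -out client.crt
$ cat client.crt client.key > client.pem

Wired into octavia.conf along the lines of Michael's notes quoted further down (the option names are Octavia's standard ones; the paths are illustrative):

[certificates]
cert_generator = local_cert_generator
ca_certificate = /etc/octavia/certs/ca_server_01.pem
ca_private_key = /etc/octavia/certs/cakey.pem
# ca_private_key_passphrase is needed only if the server CA key is encrypted

[controller_worker]
client_ca = /etc/octavia/certs/ca_01.pem

[haproxy_amphora]
client_cert = /etc/octavia/certs/client.pem
server_ca = /etc/octavia/certs/ca_server_01.pem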
> > > > Right now I'm just getting: Could not connect to instance. Retrying.: > > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > > from the amphoras, haven't got any further but I've eliminated a lot of > > stuck in the middle. > > > > Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > > behind. > > Checking the amphora with openssl s_client [3] it gives the same one, > > but the verification is successful just that I don't understand what the > > bad signature > > part is about, from browsing some OpenSSL code it seems to be related to > > RSA signatures somehow. > > > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > > signature:s3_clnt.c:2032: > > > > So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > > back to something related > > to the certificates or the communication between the endpoints, or what > > actually responds inside the amphora (gunicorn IIUC?). Based on the > > "verify" functions actually causing that bad signature error I would > > assume it's the generated certificate that the amphora presents that is > > causing it. > > > > I'll have to continue the troubleshooting to the inside of the amphora, > > I've used the test-only amphora image before but have now built my own > > one that is > > using the amphora-agent from the actual stable branch, but same issue > > (bad signature). > > > > For verbosity this is the config options set for the certificates in > > octavia.conf and which file it was copied from [4], same here, a > > replication of what openstack-ansible does. > > > > Appreciate any feedback or help :) > > > > Best regards > > Tobias > > > > [1] > > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > > [2] http://paste.openstack.org/show/732483/ > > [3] http://paste.openstack.org/show/732486/ > > [4] http://paste.openstack.org/show/732487/ > > > > On 10/20/2018 01:53 AM, Michael Johnson wrote: > > > Hi Erik, > > > > > > Sorry to hear you are still having certificate issues. > > > > > > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > > > network for the VIP, one of the first steps after the worker connects > > > to the amphora agent is finishing the required configuration of the > > > VIP interface inside the network namespace on the amphroa. > > > > Thanks for the hint on the workflow of this. I hadn't gotten deep > enough into the code to find that yet, but I suspected it was blocking > since the namespace never got created either. Thanks > > > > If I remember correctly, you are attempting to configure Octavia with > > > the dual CA option (which is good for non-development use). > > > > > > This is what I have for notes: > > > > > > [certificates] gets the following: > > > cert_generator = local_cert_generator > > > ca_certificate = server CA's "server.pem" file > > > ca_private_key = server CA's "server.key" file > > > ca_private_key_passphrase = pass phrase for ca_private_key > > > [controller_worker] > > > client_ca = Client CA's ca_cert file > > > [haproxy_amphora] > > > client_cert = Client CA's client.pem file (I think with it's key > > > concatenated is what rm_work said the other day) > > > server_ca = Server CA's ca_cert file > > > > > This is all very helpful. It's a bit difficult to know what goes where > the way the documentation is written presently. 
For something that's > going to be the defacto standard for loadbalancing, we as a community > need to do a better job of documenting how to set up, configure, and > manage this in production. I'm trying to capture my lessons learned > and processes as I go to help with that if I can. > > -Erik > > > > That said, I can probably run through this and write something up next > > > week that is more step-by-step/detailed. > > > > > > Michael > > > > > > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > > > wrote: > > >> Apologies for cross-posting, but in the event that these might be > > >> worth filing as bugs, I wanted the Octavia devs to see it as well... > > >> > > >> I've been wrestling with getting Octavia up and running and have > > >> become stuck on two issues. I'm hoping someone has run into these > > >> before. My google foo has come up empty. > > >> > > >> Issue 1: > > >> When the Octavia controller tries to poll the amphora instance, it > > >> tries repeatedly and eventually fails. The error on the controller > > >> side is: > > >> > > >> 2018-10-19 14:17:39.181 26 ERROR > > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > > >> retries (currently set to 300) exhausted. The amphora is unavailable. > > >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > > >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > > >> SSLError(SSLError("bad handshake: Error([('rsa routines', > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > >> 'tls_process_server_certificate', 'certificate verify > > >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > > >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > > >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > >> 'tls_process_server_certificate', 'certificate verify > > >> failed')],)",),)) > > >> > > >> On the amphora side I see: > > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > > >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > > >> failure (_ssl.c:1754) > > >> > > >> I've generated certificates both with the script in the Octavia git > > >> repo, and with the Openstack Ansible playbook. I can see that they are > > >> present in /etc/octavia/certs. > > >> > > >> I'm using the Kolla (Queens) containers for the control plane so I'm > > >> sure I've satisfied all the python library constraints. > > >> > > >> Issue 2: > > >> I"m not sure how it gets configured, but the tenant network interface > > >> (ens6) never comes up. I can spawn other instances on that network > > >> with no issue, and I can see that Neutron has the port attached to the > > >> instance. 
However, in the instance this is all I get: > > >> > > >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > > >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > >> group default qlen 1 > > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > >> inet 127.0.0.1/8 scope host lo > > >> valid_lft forever preferred_lft forever > > >> inet6 ::1/128 scope host > > >> valid_lft forever preferred_lft forever > > >> 2: ens3: mtu 9000 qdisc pfifo_fast > > >> state UP group default qlen 1000 > > >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > > >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > > >> valid_lft forever preferred_lft forever > > >> inet6 fe80::f816:3eff:fe30:c460/64 scope link > > >> valid_lft forever preferred_lft forever > > >> 3: ens6: mtu 1500 qdisc noop state DOWN group > > >> default qlen 1000 > > >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > >> > > >> There's no evidence of the interface anywhere else including udev rules. > > >> > > >> Any help with either or both issues would be greatly appreciated. > > >> > > >> Cheers, > > >> Erik > > >> > > >> __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gael.therond at gmail.com Mon Oct 22 16:13:06 2018 From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 22 Oct 2018 18:13:06 +0200 Subject: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: Doing the same documentation process here as well (except that I’m using kolla). The only annoying thing is the doc submission process :-/. Le lun. 22 oct. 2018 à 16:50, Erik McCormick a écrit : > Oops, dropped Operators. Can't wait until it's all one list... > On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick > wrote: > > > > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin > wrote: > > > > > > Hello, > > > > > > I've been having a lot of issues with SSL certificates myself, on my > > > second trip now trying to get it working. > > > > > > Before I spent a lot of time walking through every line in the DevStack > > > plugin and fixing my config options, used the generate > > > script [1] and still it didn't work. > > > > > > When I got the "invalid padding" issue it was because of the DN I used > > > for the CA and the certificate IIRC. > > > > > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > > > to instance. 
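For anyone retracing this thread: the dual-CA layout Erik describes can be
reproduced with plain openssl. A minimal sketch, reusing his file names but
with illustrative DNs, key sizes, and lifetimes (and keeping Tobias's fix in
mind: the CA DN must differ from the certificate DN):

# Client CA (issues the certificate the controllers present to the amphorae):
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=octavia-client-ca" -keyout ca_01.key -out ca_01.pem

# Server CA (issues the per-amphora server certificates):
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=octavia-server-ca" -keyout cakey.pem -out ca_server_01.pem

# Controller client certificate, with key and certificate concatenated
# into the client.pem the worker presents:
openssl req -nodes -newkey rsa:2048 -subj "/CN=octavia-controller" \
    -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca_01.pem -CAkey ca_01.key \
    -CAcreateserial -days 365 -out client.cert
cat client.key client.cert > client.pem

These then map onto the octavia.conf options Michael lists earlier in the
thread: the server CA pair under [certificates], ca_01.pem as the
[controller_worker] client_ca, client.pem as the [haproxy_amphora]
client_cert, and ca_server_01.pem as its server_ca.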
From florian.engelmann at everyware.ch Tue Oct 23 11:52:15 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Tue, 23 Oct 2018 13:52:15 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: References: Message-ID:

Hi,

We did test Octavia with Pike (DVR deployment) and everything was
working right out of the box. We changed our underlay network to a
Layer3 spine-leaf network now and did not deploy DVR as we didn't want
to have that many cables in a rack.

Octavia is not working right now as the lb-mgmt-net does not exist on
the compute nodes nor does a br-ex.

The control nodes running

octavia_worker
octavia_housekeeping
octavia_health_manager
octavia_api

and as far as I understood octavia_worker, octavia_housekeeping and
octavia_health_manager have to talk to the amphora instances. But the
control nodes are spread over three different leafs. So each control
node in a different L2 domain.

So the question is how to deploy a lb-mgmt-net network in our setup?

- Compute nodes have no "stretched" L2 domain
- Control nodes, compute nodes and network nodes are in L3 networks like
api, storage, ...
- Only network nodes are connected to a L2 domain (with a separated NIC)
providing the "public" network

All the best,
Florian

From emccormick at cirrusseven.com Tue Oct 23 13:20:44 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Tue, 23 Oct 2018 09:20:44 -0400
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: References: Message-ID:

On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann wrote:
>
> Hi,
>
> We did test Octavia with Pike (DVR deployment) and everything was
> working right out of the box. We changed our underlay network to a
> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> to have that many cables in a rack.
>
> Octavia is not working right now as the lb-mgmt-net does not exist on
> the compute nodes nor does a br-ex.
>
> The control nodes running
>
> octavia_worker
> octavia_housekeeping
> octavia_health_manager
> octavia_api
>
> and as far as I understood octavia_worker, octavia_housekeeping and
> octavia_health_manager have to talk to the amphora instances. But the
> control nodes are spread over three different leafs. So each control
> node in a different L2 domain.
>
> So the question is how to deploy a lb-mgmt-net network in our setup?
>
> - Compute nodes have no "stretched" L2 domain
> - Control nodes, compute nodes and network nodes are in L3 networks like
> api, storage, ...
> - Only network nodes are connected to a L2 domain (with a separated NIC)
> providing the "public" network
>
You'll need to add a new bridge to your compute nodes and create a
provider network associated with that bridge. In my setup this is
simply a flat network tied to a tagged interface. In your case it
probably makes more sense to make a new VNI and create a vxlan
provider network. The routing in your switches should handle the rest.

-Erik
>
> All the best,
> Florian
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From florian.engelmann at everyware.ch Tue Oct 23 14:09:05 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Tue, 23 Oct 2018 16:09:05 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: References: Message-ID: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>

On 10/23/18 3:20 PM, Erik McCormick wrote:
> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> wrote:
>>
>> Hi,
>>
>> We did test Octavia with Pike (DVR deployment) and everything was
>> working right out of the box. We changed our underlay network to a
>> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
>> to have that many cables in a rack.
>>
>> Octavia is not working right now as the lb-mgmt-net does not exist on
>> the compute nodes nor does a br-ex.
>>
>> The control nodes running
>>
>> octavia_worker
>> octavia_housekeeping
>> octavia_health_manager
>> octavia_api
>>
>> and as far as I understood octavia_worker, octavia_housekeeping and
>> octavia_health_manager have to talk to the amphora instances. But the
>> control nodes are spread over three different leafs. So each control
>> node in a different L2 domain.
>>
>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>
>> - Compute nodes have no "stretched" L2 domain
>> - Control nodes, compute nodes and network nodes are in L3 networks like
>> api, storage, ...
>> - Only network nodes are connected to a L2 domain (with a separated NIC)
>> providing the "public" network
>>
> You'll need to add a new bridge to your compute nodes and create a
> provider network associated with that bridge. In my setup this is
> simply a flat network tied to a tagged interface. In your case it
> probably makes more sense to make a new VNI and create a vxlan
> provider network. The routing in your switches should handle the rest.

Ok that's what I try right now. But I don't get how to set up something
like a VxLAN provider network. I thought only vlan and flat are supported
as provider networks? I guess it is not possible to use the tunnel
interface that is used for tenant networks?
So I have to create a separate VxLAN on the control and compute nodes like:

# ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
dev vlan3535 ttl 5
# ip addr add 172.16.1.11/20 dev vxoctavia
# ip link set vxoctavia up

and use it like a flat provider network, true?

>
> -Erik
>>
>> All the best,
>> Florian
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From gael.therond at gmail.com Tue Oct 23 14:09:28 2018
From: gael.therond at gmail.com (Gaël THEROND)
Date: Tue, 23 Oct 2018 16:09:28 +0200
Subject: [Openstack-operators] [OCTAVIA][QUEENS][KOLLA] - Amphora to Health-manager invalid UDP heartbeat.
Message-ID:

Hi guys,

I'm finishing up work on my POC for Octavia and after solving a few issues
with my configuration I'm close to getting a properly working setup.
However, I'm facing a small but yet annoying bug with the health-manager
receiving amphora heartbeat UDP packets which it considers as not correct
and so drops.

Here are the messages that can be found in logs:

2018-10-23 13:53:21.844 25 WARNING octavia.amphorae.backends.health_daemon.status_message [-] calculated hmac: faf73e41a0f843b826ee581c3995b7f7e56b5e5a294fca0b84eda426766f8415 not equal to msg hmac: 6137613337316432636365393832376431343337306537353066626130653261 dropping packet

Which comes from this part of the HM code:

https://docs.openstack.org/octavia/pike/_modules/octavia/amphorae/backends/health_daemon/status_message.html#get_payload

The annoying thing is that I don't get why the UDP packet is considered
stale and how I can try to reproduce the payload which is sent to the
HealthManager. I'm willing to write a simple PY program to simulate the
heartbeat payload but I don't know exactly what the message is and I think
I am missing some information.

Both the HealthManager and the Amphora use the same heartbeat_key and both
can reach each other on the network, as the initial Health-manager to
Amphora 9443 connection is validated.

As an effect of this situation, my loadbalancer is stuck in PENDING_UPDATE
mode.

Do you have any idea how I can handle such a thing, or is it something
already seen out there by anyone else?

Kind regards,
G.

From florian.engelmann at everyware.ch Tue Oct 23 15:57:53 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Tue, 23 Oct 2018 17:57:53 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: References: Message-ID:

Is there any kolla-ansible Octavia guide?

On 10/23/18 1:52 PM, Florian Engelmann wrote:
> Hi,
>
> We did test Octavia with Pike (DVR deployment) and everything was
> working right out of the box. We changed our underlay network to a
> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> to have that many cables in a rack.
>
> Octavia is not working right now as the lb-mgmt-net does not exist on
> the compute nodes nor does a br-ex.
>
> The control nodes running
>
> octavia_worker
> octavia_housekeeping
> octavia_health_manager
> octavia_api
>
> and as far as I understood octavia_worker, octavia_housekeeping and
> octavia_health_manager have to talk to the amphora instances. But the
> control nodes are spread over three different leafs. So each control
> node in a different L2 domain.
>
> So the question is how to deploy a lb-mgmt-net network in our setup?
>
> - Compute nodes have no "stretched" L2 domain
> - Control nodes, compute nodes and network nodes are in L3 networks like
> api, storage, ...
> - Only network nodes are connected to a L2 domain (with a separated NIC)
> providing the "public" network
>
>
> All the best,
> Florian
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From emccormick at cirrusseven.com Tue Oct 23 16:57:12 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Tue, 23 Oct 2018 12:57:12 -0400
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>
Message-ID:

So in your other email you asked if there was a guide for deploying it
with Kolla ansible...

Oh boy. No there's not. I don't know if you've seen my recent mails on
Octavia, but I am going through this deployment process with
kolla-ansible right now and it is lacking in a few areas.

If you plan to use different CA certificates for client and server in
Octavia, you'll need to add that into the playbook. Presently it only
copies over ca_01.pem, cacert.key, and client.pem and uses them for
everything. I was completely unable to make it work with only one CA
as I got some SSL errors. It passes gate though, so I assume it must
work? I dunno.

Networking comments and a really messy kolla-ansible / octavia how-to below...

On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann wrote:
>
> On 10/23/18 3:20 PM, Erik McCormick wrote:
> > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > wrote:
> >>
> >> Hi,
> >>
> >> We did test Octavia with Pike (DVR deployment) and everything was
> >> working right out of the box. We changed our underlay network to a
> >> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> >> to have that many cables in a rack.
> >>
> >> Octavia is not working right now as the lb-mgmt-net does not exist on
> >> the compute nodes nor does a br-ex.
> >>
> >> The control nodes running
> >>
> >> octavia_worker
> >> octavia_housekeeping
> >> octavia_health_manager
> >> octavia_api
> >>
> >> and as far as I understood octavia_worker, octavia_housekeeping and
> >> octavia_health_manager have to talk to the amphora instances. But the
> >> control nodes are spread over three different leafs. So each control
> >> node in a different L2 domain.
> >>
> >> So the question is how to deploy a lb-mgmt-net network in our setup?
> >>
> >> - Compute nodes have no "stretched" L2 domain
> >> - Control nodes, compute nodes and network nodes are in L3 networks like
> >> api, storage, ...
> >> - Only network nodes are connected to a L2 domain (with a separated NIC)
> >> providing the "public" network
> >>
> > You'll need to add a new bridge to your compute nodes and create a
> > provider network associated with that bridge. In my setup this is
> > simply a flat network tied to a tagged interface. In your case it
> > probably makes more sense to make a new VNI and create a vxlan
> > provider network. The routing in your switches should handle the rest.
>
> Ok that's what I try right now. But I don't get how to set up something
> like a VxLAN provider network. I thought only vlan and flat are supported
> as provider networks? I guess it is not possible to use the tunnel
> interface that is used for tenant networks?
> So I have to create a separate VxLAN on the control and compute nodes like:
>
> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> dev vlan3535 ttl 5
> # ip addr add 172.16.1.11/20 dev vxoctavia
> # ip link set vxoctavia up
>
> and use it like a flat provider network, true?
>
This is a fine way of doing things, but it's only half the battle.
You'll need to add a bridge on the compute nodes and bind it to that
new interface. Something like this if you're using openvswitch:

docker exec openvswitch_db
/usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia

Also you'll want to remove the IP address from that interface as it's
going to be a bridge. Think of it like your public (br-ex) interface
on your network nodes.

From there you'll need to update the bridge mappings via kolla
overrides. This would usually be in /etc/kolla/config/neutron. Create
a subdirectory for your compute inventory group and create an
ml2_conf.ini there. So you'd end up with something like:

[root at kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
[ml2_type_flat]
flat_networks = mgmt-net

[ovs]
bridge_mappings = mgmt-net:br-mgmt

run kolla-ansible --tags neutron reconfigure to push out the new
configs. Note that there is a bug where the neutron containers may not
restart after the change, so you'll probably need to do a 'docker
container restart neutron_openvswitch_agent' on each compute node.

At this point, you'll need to create the provider network in the admin
project like:

openstack network create --provider-network-type flat
--provider-physical-network mgmt-net lb-mgmt-net

And then create a normal subnet attached to this network with some
largeish address scope. I wouldn't use 172.16.0.0/16 because docker
uses that by default. I'm not sure if it matters since the network
traffic will be isolated on a bridge, but it makes me paranoid so I
avoided it.

For your controllers, I think you can just let everything function off
your api interface since you're routing in your spines. Set up a
gateway somewhere from that lb-mgmt network and save yourself the
complication of adding an interface to your controllers. If you choose
to use a separate interface on your controllers, you'll need to make
sure this patch is in your kolla-ansible install or cherry pick it.

https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59

I don't think that's been backported at all, so unless you're running
off master you'll need to go get it.

From here on out, the regular Octavia instruction should serve you.
Create a flavor, create a security group, and capture their UUIDs
along with the UUID of the provider network you made. Override them in
globals.yml with:

octavia_amp_boot_network_list:
octavia_amp_secgroup_list:
octavia_amp_flavor_id:

This is all from my scattered notes and bad memory. Hopefully it makes
sense. Corrections welcome.

-Erik
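One step those notes gloss over is the subnet itself. A possible invocation,
with an illustrative range picked to avoid the docker overlap Erik mentions:

openstack subnet create --network lb-mgmt-net \
    --subnet-range 172.31.0.0/16 \
    --allocation-pool start=172.31.0.10,end=172.31.255.250 \
    lb-mgmt-subnet

The allocation pool is only there to leave room for a gateway and other
static addresses on the management network; adjust both to taste.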
From johnsomor at gmail.com Tue Oct 23 17:09:13 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 23 Oct 2018 10:09:13 -0700
Subject: [Openstack-operators] [OCTAVIA][QUEENS][KOLLA] - Amphora to Health-manager invalid UDP heartbeat.
In-Reply-To: References: Message-ID:

Are the controller and the amphora using the same version of Octavia?

We had a python3 issue where we had to change the HMAC digest used. If
your controller is running an older version of Octavia than your
amphora images, it may not have the compatibility code to support the
new format.

The compatibility code is here:
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/health_daemon/status_message.py#L56
There is also a release note about the issue here:
https://docs.openstack.org/releasenotes/octavia/rocky.html#upgrade-notes

If that is not the issue, I would double check the heartbeat_key in
the health manager configuration files and inside one of the amphora.
Note, that this key is only used for health heartbeats and stats, it
is not used for the controller to amphora communication on port 9443.

Also, load balancers cannot get "stuck" in PENDING_* states unless
someone has killed the controller process that was actively working on
that load balancer. By killed I mean a non-graceful shutdown of the
process that was in the middle of working on the load balancer.
Otherwise all code paths lead back to ACTIVE or ERROR status after it
finishes the work or gives up retrying the requested action. Check
your controller logs to make sure this load balancer is not still
being worked on by one of the controllers. The default retry timeouts
(some are up to 25 minutes) are very long (it will keep trying to
accomplish the request) to accommodate very slow (virtual box) hosts
and the test gates. You will want to tune those down for a production
deployment.

Michael
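The version-skew failure mode Michael describes can be sketched from the
shell with openssl rather than Octavia's actual packet code: if one side
handles the HMAC-SHA256 digest as an ASCII hex string while the other works
with raw bytes, a hex-of-hex string comes out, the same shape as the
6137... value in Gaël's warning. A rough illustration, with a made-up key
and payload:

KEY=insecure
HEX=$(printf 'dummy-heartbeat-payload' | openssl dgst -sha256 -hmac "$KEY" | awk '{print $NF}')
echo "$HEX"                          # the digest as a hex string
printf '%s' "$HEX" | xxd -p -c 128   # the same string re-encoded as hex bytes

Decoding the hex pairs of the logged msg hmac (61 is 'a', 37 is '7', and so
on) gives back a plain hex digest, which is consistent with the encoding
mismatch Michael describes; a mismatched heartbeat_key between
/etc/octavia/octavia.conf on the controllers and the agent config inside
the amphora is still worth ruling out.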
From johnsomor at gmail.com Tue Oct 23 17:48:42 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 23 Oct 2018 10:48:42 -0700
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: References: Message-ID:

I am still catching up on e-mail from the weekend.

There are a lot of different options for how to implement the
lb-mgmt-network for the controller<->amphora communication. I can't
talk to what options Kolla provides, but I can talk to how Octavia
works.

One thing to note on the lb-mgmt-net issue: if you can set up routes
such that the controllers can reach the IP addresses used for the
lb-mgmt-net, and that the amphora can reach the controllers, Octavia
can run with a routed lb-mgmt-net setup. There is no L2 requirement
between the controllers and the amphora instances.

Michael
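To make the routed option concrete: each controller binds its own routed
address and the amphorae are handed the full list. A minimal sketch of the
octavia.conf health-manager section, with made-up addresses for three
controllers on different leaves:

[health_manager]
bind_ip = 10.10.1.10
bind_port = 5555
controller_ip_port_list = 10.10.1.10:5555, 10.10.2.10:5555, 10.10.3.10:5555

As long as the lb-mgmt-net router can reach those addresses and the
controllers can route back to the amphora subnet, no shared L2 segment is
involved.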
From michael.d.moore at nasa.gov Tue Oct 23 22:21:18 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Tue, 23 Oct 2018 22:21:18 +0000
Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID:

We have submitted a bug for this https://bugs.launchpad.net/glance/+bug/1799588

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
Date: Saturday, October 20, 2018 at 7:22 PM
To: Logan Hicks , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 96, Issue 7

The images exist and are bootable. I'm going to trace through the actual
code for glance API. Any suggestions on where the show/hide logic is when
it filters responses? I'm new to digging through OpenStack code.
________________________________
From: Logan Hicks [logan.hicks at live.com]
Sent: Friday, October 19, 2018 8:00 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 96, Issue 7

Re: Glance Image Visibility Issue? - Non admin users can see private images
from other tenants (Chris Apsey)

I noticed that the image says queued. If I'm not mistaken, an image can't
have permissions applied until after the image is created, which might
explain the issue he's seeing.

The object doesn't exist until it's made by OpenStack. I'd check to see if
something is holding up images being made. I'd start with glance.

Respectfully,

Logan Hicks

-------- Original message --------
From: openstack-operators-request at lists.openstack.org
Date: 10/19/18 7:49 PM (GMT-05:00)
To: openstack-operators at lists.openstack.org
Subject: OpenStack-operators Digest, Vol 96, Issue 7

Today's Topics:

   1. [nova] Removing the CachingScheduler (Matt Riedemann)
   2. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Moore, Michael Dane
      (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   3. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Chris Apsey)
   4. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (iain MacDonnell)
   5. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Moore, Michael Dane
      (GSFC-720.0)[BUSINESS INTEGRA, INC.])
   6. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (iain MacDonnell)
   7. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Chris Apsey)
   8. osops-tools-monitoring Dependency problems (Tomáš Vondra)
   9. [heat][cinder] How to create stack snapshot including volumes
      (Christian Zunker)
  10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias)
  11. Re: [Openstack-sigs] [all] Naming the T release of OpenStack
      (Tony Breeds)
  12. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Moore, Michael Dane
      (GSFC-720.0)[BUSINESS INTEGRA, INC.])
  13. Re: Glance Image Visibility Issue? - Non admin users can see
      private images from other tenants (Moore, Michael Dane
      (GSFC-720.0)[BUSINESS INTEGRA, INC.])
  14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes)
  15. Re: Fleio - OpenStack billing - ver. 1.1 released (Mohammed Naser)
  16. [Octavia] SSL errors polling amphorae and missing tenant network
      interface (Erik McCormick)
  17. Re: [Octavia] SSL errors polling amphorae and missing tenant
      network interface (Gaël THEROND)

----------------------------------------------------------------------

Message: 1
Date: Thu, 18 Oct 2018 17:07:00 -0500
From: Matt Riedemann
To: "openstack-operators at lists.openstack.org"
Subject: [Openstack-operators] [nova] Removing the CachingScheduler
Message-ID:
Content-Type: text/plain; charset=utf-8; format=flowed

It's been deprecated since Pike, and the time has come to remove it [1].

mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and said it will work for migrating his deployment from the
CachingScheduler to the FilterScheduler + Placement.

If you are using the CachingScheduler and have a problem with its
removal, now is the time to speak up or forever hold your peace.

[1] https://review.openstack.org/#/c/611723/1

--

Thanks,

Matt
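For anyone planning that migration, the moving parts are the scheduler
driver in nova.conf and the backfill command Matt mentions. A sketch,
treating the flag as illustrative:

[scheduler]
driver = filter_scheduler

nova-manage placement heal_allocations --verbose

The heal_allocations pass creates placement allocations for instances the
CachingScheduler never recorded, so the FilterScheduler sees accurate
inventory afterwards.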
Respectfully, Logan Hicks -------- Original message -------- From: openstack-operators-request at lists.openstack.org Date: 10/19/18 7:49 PM (GMT-05:00) To: openstack-operators at lists.openstack.org Subject: OpenStack-operators Digest, Vol 96, Issue 7 Send OpenStack-operators mailing list submissions to openstack-operators at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators or, via email, send a message with subject or body 'help' to openstack-operators-request at lists.openstack.org You can reach the person managing the list at openstack-operators-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of OpenStack-operators digest..." Today's Topics: 1. [nova] Removing the CachingScheduler (Matt Riedemann) 2. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) 3. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey) 4. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell) 5. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) 6. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (iain MacDonnell) 7. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Chris Apsey) 8. osops-tools-monitoring Dependency problems (Tomáš Vondra) 9. [heat][cinder] How to create stack snapshot including volumes (Christian Zunker) 10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias) 11. Re: [Openstack-sigs] [all] Naming the T release of OpenStack (Tony Breeds) 12. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) 13. Re: Glance Image Visibility Issue? - Non admin users can see private images from other tenants (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) 14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes) 15. Re: Fleio - OpenStack billing - ver. 1.1 released (Mohammed Naser) 16. [Octavia] SSL errors polling amphorae and missing tenant network interface (Erik McCormick) 17. Re: [Octavia] SSL errors polling amphorae and missing tenant network interface (Gaël THEROND) ---------------------------------------------------------------------- Message: 1 Date: Thu, 18 Oct 2018 17:07:00 -0500 From: Matt Riedemann To: "openstack-operators at lists.openstack.org" Subject: [Openstack-operators] [nova] Removing the CachingScheduler Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work for migrating his deployment from the CachingScheduler to the FilterScheduler + Placement. If you are using the CachingScheduler and have a problem with its removal, now is the time to speak up or forever hold your peace. 
[1] https://review.openstack.org/#/c/611723/1 -- Thanks, Matt ------------------------------ Message: 2 Date: Thu, 18 Oct 2018 22:11:40 +0000 From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" To: iain MacDonnell , "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: Content-Type: text/plain; charset="utf-8" I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > I’m seeing unexpected behavior in our Queens environment related to > Glance image visibility. Specifically users who, based on my > understanding of the visibility and ownership fields, should NOT be able > to see or view the image. > > If I create a new image with openstack image create and specify –project > and –private a non-admin user in a different tenant can see and > boot that image. > > That seems to be the opposite of what should happen. Any ideas? Yep, something's not right there. Are you sure that the user that can see the image doesn't have the admin role (for the project in its keystone token) ? Did you verify that the image's owner is what you intended, and that the visibility really is "private" ? ~iain _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ Message: 3 Date: Thu, 18 Oct 2018 18:23:35 -0400 From: Chris Apsey To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , iain MacDonnell , Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants Message-ID: <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> Content-Type: text/plain; format=flowed; charset="UTF-8" Do you have a liberal/custom policy.json that perhaps is causing unexpected behavior? Can't seem to reproduce this. On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: > I have replicated this unexpected behavior in a Pike test environment, in > addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, > INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different tenant. 
------------------------------

Message: 4
Date: Thu, 18 Oct 2018 15:25:22 -0700
From: iain MacDonnell
To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed

I suspect that your non-admin user is not really non-admin. How did you create it?

What do you have for "context_is_admin" in glance's policy.json?

~iain

On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote:
> I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment.
> [...]

------------------------------

Message: 5
Date: Thu, 18 Oct 2018 22:32:42 +0000
From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
To: iain MacDonnell , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov>
Content-Type: text/plain; charset="utf-8"

openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike

openstack role add --user mike --user-domain default --project test user

my admin account is in the NDC domain with a different username.

/etc/glance/policy.json
{
    "context_is_admin": "role:admin",
    "default": "role:admin",

I'm not terribly familiar with the policies, but I feel like that default line is making everyone an admin by default?

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

On 10/18/18, 6:25 PM, "iain MacDonnell" wrote:

    I suspect that your non-admin user is not really non-admin. How did you create it?
    [...]
------------------------------

Message: 6
Date: Thu, 18 Oct 2018 15:48:27 -0700
From: iain MacDonnell
To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID:
Content-Type: text/plain; charset=utf-8; format=flowed

That all looks fine.

I believe that the "default" policy applies in place of any that's not explicitly specified - i.e. "if there's no matching policy below, you need to have the admin role to be able to do it". I do have that line in my policy.json, and I cannot reproduce your problem (see below).

I'm not using domains (other than "default"). I wonder if that's a factor...
~iain

$ openstack user create --password foo user1
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d18c0031ec56430499a2d690cb1f125c |
| name                | user1                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack user create --password foo user2
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | be9f1061a5104abd834eabe98dff055d |
| name                | user2                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack project create project1
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 826876d6d3724018bae6253c7f540cb3 |
| is_domain   | False                            |
| name        | project1                         |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
$ openstack project create project2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | b446b93ac6e24d538c1943acbdd13cb2 |
| is_domain   | False                            |
| name        | project2                         |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
$ openstack role add --user user1 --project project1 _member_
$ openstack role add --user user2 --project project2 _member_
$ export OS_PASSWORD=foo
$ export OS_USERNAME=user1
$ export OS_PROJECT_NAME=project1
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image create --private image1
+------------------+------------------------------------------------------------------------------+
| Field            | Value                                                                        |
+------------------+------------------------------------------------------------------------------+
| checksum         | None                                                                         |
| container_format | bare                                                                         |
| created_at       | 2018-10-18T22:17:41Z                                                         |
| disk_format      | raw                                                                          |
| file             | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file                         |
| id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4                                         |
| min_disk         | 0                                                                            |
| min_ram          | 0                                                                            |
| name             | image1                                                                       |
| owner            | 826876d6d3724018bae6253c7f540cb3                                             |
| properties       | locations='[]', os_hash_algo='None', os_hash_value='None', os_hidden='False' |
| protected        | False                                                                        |
| schema           | /v2/schemas/image                                                            |
| size             | None                                                                         |
| status           | queued                                                                       |
| tags             |                                                                              |
| updated_at       | 2018-10-18T22:17:41Z                                                         |
| virtual_size     | None                                                                         |
| visibility       | private                                                                      |
+------------------+------------------------------------------------------------------------------+
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
| 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
+--------------------------------------+--------+--------+
$ export OS_USERNAME=user2
$ export OS_PROJECT_NAME=project2
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
+--------------------------------------+--------+--------+
$ export OS_USERNAME=admin
$ export OS_PROJECT_NAME=admin
$ export OS_PASSWORD=xxx
$ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4
$ export OS_USERNAME=user2
$ export OS_PROJECT_NAME=project2
$ export OS_PASSWORD=foo
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
| 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
+--------------------------------------+--------+--------+
$

On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote:
> openstack user create --domain default --password xxxxxxxx --project-domain
> ndc --project test mike
> [...]
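(An aside: iain's admin-role question is worth checking mechanically, since a role can sneak in via group membership. A sketch using the user and domain names from this thread - --effective expands group-inherited assignments, --names prints readable names:)

$ openstack role assignment list --user mike --user-domain default --effective --names

If "admin" shows up in that output for the project in question, glance's context_is_admin rule ("role:admin") will treat the user as an admin, and private images from other tenants become visible.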
------------------------------

Message: 7
Date: Thu, 18 Oct 2018 19:23:42 -0400
From: Chris Apsey
To: iain MacDonnell , "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" ,
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net>
Content-Type: text/plain; format=flowed; charset="UTF-8"

We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json?

Worst case, I would launch a devstack of your targeted release. If you can't reproduce the issue there, you would at least know it's caused by a nonstandard config rather than a bug (or at least not a bug that's present when using a default config)

On October 18, 2018 18:50:12 iain MacDonnell wrote:
> That all looks fine.
>
> I believe that the "default" policy applies in place of any that's not
> explicitly specified - i.e. "if there's no matching policy below, you
> need to have the admin role to be able to do it". I do have that line in
> my policy.json, and I cannot reproduce your problem (see below).
>
> I'm not using domains (other than "default"). I wonder if that's a factor...
> [...]

------------------------------

Message: 8
Date: Fri, 19 Oct 2018 10:58:30 +0200
From: Tomáš Vondra
To:
Subject: [Openstack-operators] osops-tools-monitoring Dependency problems
Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz>
Content-Type: text/plain; charset="iso-8859-2"

Hi! I'm a long-time user of monitoring-for-openstack, also known as oschecks. Concretely, I used a version from 2015 with the OpenStack python client libraries from Kilo. Now I have upgraded them to Mitaka and it got broken. Even the latest oschecks don't work. I didn't quite expect that, given that there are several commits from this year, e.g. by Nagasai Vinaykumar Kapalavai and paramite. Can one of them or some other user step up and say what version of the OpenStack clients oschecks works with? Ideally, write it down in requirements.txt so that it will be reproducible? Also, some documentation of what the minimal set of parameters is would come in handy.

Thanks a lot,
Tomas from Homeatcloud

The error messages are as absurd as:

oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0' --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring

CRITICAL: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in safe_run
    method()
  File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in _check_glance_api
    glance = utils.Glance()
  File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in __init__
    self.glance.parser = self.glance.get_base_parser(sys.argv)
TypeError: get_base_parser() takes exactly 1 argument (2 given)

(I can see 4 parameters on the command line.)
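(An aside on that traceback: it is a client-API signature mismatch - oschecks passes sys.argv to get_base_parser(), but the glanceclient release installed defines get_base_parser(self) with no extra argument. The real fix is what Tomas asks for: pinned client versions in requirements.txt. Until then, a hypothetical Python 2.7 shim along these lines - untested, names mine - shows the shape of a workaround:)

import inspect

def call_get_base_parser(shell_obj, argv):
    # Dispatch on the installed client's actual signature: some
    # python-*client releases define get_base_parser(self), others
    # get_base_parser(self, argv).
    arg_names = inspect.getargspec(shell_obj.get_base_parser).args
    if len(arg_names) > 1:  # (self, argv)
        return shell_obj.get_base_parser(argv)
    return shell_obj.get_base_parser()  # (self) only

------------------------------

Message: 9
Date: Fri, 19 Oct 2018 11:21:25 +0200
From: Christian Zunker
To: openstack-operators
Subject: [Openstack-operators] [heat][cinder] How to create stack snapshot including volumes
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi List, I'd like to take snapshots of heat stacks including the volumes.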
From what I found until now, this should be possible. You just have to configure some parts of OpenStack. I enabled cinder-backup with a ceph backend; backups of volumes are working. I configured heat to include the option backups_enabled = True. When I use openstack stack snapshot create, I get a snapshot but no backups of my volumes. I don't get any error messages in heat; debug logging didn't help either.

The OpenStack version is Pike on Ubuntu, installed with openstack-ansible. The heat version is 9.0.3, so this should also include this bugfix: https://bugs.launchpad.net/heat/+bug/1687006

Is anybody using this feature? What am I missing?

Best regards
Christian
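(For anyone retracing Christian's setup, a minimal config sketch. The [volumes] section placement is my assumption based on the option name he cites - verify against your heat release's sample config before relying on it:)

# /etc/heat/heat.conf on the heat-engine hosts
[volumes]
backups_enabled = True

# then restart heat-engine and snapshot the stack:
# openstack stack snapshot create --name snap1 <stack name or ID>

This path also assumes the cinder-backup service itself is up and registered (openstack volume service list should show it), since heat delegates the actual volume backup to cinder.

------------------------------

Message: 10
Date: Fri, 19 Oct 2018 12:42:00 +0300
From: Adrian Andreias
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hello,

We've just released Fleio version 1.1.

Fleio is a billing solution and control panel for OpenStack public clouds and traditional web hosters.

Fleio software automates the entire process for cloud users. New customers can use Fleio to sign up for an account, pay invoices, add credit to their account, as well as create and manage cloud resources such as virtual machines, storage and networking.

Full feature list: https://fleio.com#features

You can see an online demo: https://fleio.com/demo

And sign-up for a free trial: https://fleio.com/signup

Cheers!

- Adrian Andreias
https://fleio.com

------------------------------

Message: 11
Date: Fri, 19 Oct 2018 20:54:29 +1100
From: Tony Breeds
To: OpenStack Development , OpenStack SIGs , OpenStack Operators
Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack
Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
>     As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry). The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria
> * Train

I have re-worked my openstack/governance change[1] to ask the TC to accept adding Train to the poll as (partially) described in [2].

I present the names above to the community and Foundation marketing team for consideration. The list above does contain Train; clearly, if the TC do not approve [1], Train will not be included in the poll when created.

I apologise for any offence or slight caused by my previous email in this thread. It was well intentioned albeit, with hindsight, poorly thought through.

Yours Tony.

[1] https://review.openstack.org/#/c/611511/
[2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria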
------------------------------

Message: 12
Date: Fri, 19 Oct 2018 16:33:17 +0000
From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov>
Content-Type: text/plain; charset="utf-8"

Our NDC domain is LDAP backed. Default is not.

Our keystone policy.json file is empty:

{}

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

On 10/18/18, 7:24 PM, "Chris Apsey" wrote:

    We are using multiple keystone domains - still can't reproduce this. Do you happen to have a customized keystone policy.json?
    [...]

------------------------------

Message: 13
Date: Fri, 19 Oct 2018 16:54:12 +0000
From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
To: Chris Apsey , iain MacDonnell , "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue?
- Non admin users can see private images from other tenants
Message-ID:
Content-Type: text/plain; charset="utf-8"

For reference, here is our full glance policy.json:

{
    "context_is_admin": "role:admin",
    "default": "role:admin",

    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "communitize_image": "",
    "copy_from": "",

    "download_image": "",
    "upload_image": "",

    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",

    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",

    "manage_image_cache": "role:admin",

    "get_task": "",
    "get_tasks": "",
    "add_task": "",
    "modify_task": "",
    "tasks_api_access": "role:admin",

    "deactivate": "",
    "reactivate": "",

    "get_metadef_namespace": "",
    "get_metadef_namespaces": "",
    "modify_metadef_namespace": "",
    "add_metadef_namespace": "",

    "get_metadef_object": "",
    "get_metadef_objects": "",
    "modify_metadef_object": "",
    "add_metadef_object": "",

    "list_metadef_resource_types": "",
    "get_metadef_resource_type": "",
    "add_metadef_resource_type_association": "",

    "get_metadef_property": "",
    "get_metadef_properties": "",
    "modify_metadef_property": "",
    "add_metadef_property": "",

    "get_metadef_tag": "",
    "get_metadef_tags": "",
    "modify_metadef_tag": "",
    "add_metadef_tag": "",
    "add_metadef_tags": ""
}

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote:

    Our NDC domain is LDAP backed. Default is not. Our keystone policy.json file is empty {}
    [...]
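(A reading note on that file, not from the thread itself: in oslo.policy an empty string means "always allow", so rules like "get_images": "" admit any authenticated user to the image-list API; the per-image private/public filtering then happens in glance's database layer, not in policy. The "default" rule only applies to actions missing from the file entirely. To probe a single rule against a set of credentials, the oslopolicy-checker utility that ships with oslo.policy can be used - a sketch, assuming a sample_creds.json dumped from a token of the affected user:)

$ oslopolicy-checker --policy /etc/glance/policy.json --access sample_creds.json --rule get_images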
------------------------------

Message: 14
Date: Fri, 19 Oct 2018 13:45:03 -0400
From: Jay Pipes
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset=utf-8; format=flowed

Please do not use these mailing lists to advertise closed-source/proprietary software solutions.

Thank you,
-jay

On 10/19/2018 05:42 AM, Adrian Andreias wrote:
> Hello,
>
> We've just released Fleio version 1.1.
> [...]

------------------------------

Message: 15
Date: Fri, 19 Oct 2018 20:13:40 +0200
From: Mohammed Naser
To: jaypipes at gmail.com
Cc: openstack-operators
Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
Message-ID:
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote:
>
> Please do not use these mailing lists to advertise
> closed-source/proprietary software solutions.
+1

> Thank you,
> -jay
>
> On 10/19/2018 05:42 AM, Adrian Andreias wrote:
> > Hello,
> >
> > We've just released Fleio version 1.1.
> > [...]

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

------------------------------

Message: 16
Date: Fri, 19 Oct 2018 14:39:29 -0400
From: Erik McCormick
To: openstack-operators
Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface
Message-ID:
Content-Type: text/plain; charset="UTF-8"

I've been wrestling with getting Octavia up and running and have become stuck on two issues. I'm hoping someone has run into these before. My google foo has come up empty.

Issue 1:
When the Octavia controller tries to poll the amphora instance, it tries repeatedly and eventually fails. The error on the controller side is:

2018-10-19 14:17:39.181 26 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 300) exhausted. The amphora is unavailable. Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))

On the amphora side I see:
[2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
[2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1754)

I've generated certificates both with the script in the Octavia git repo, and with the Openstack Ansible playbook. I can see that they are present in /etc/octavia/certs.
I'm using the Kolla (Queens) containers for the control plane, so I'm sure I've satisfied all the python library constraints.

Issue 2:
I'm not sure how it gets configured, but the tenant network interface (ens6) never comes up. I can spawn other instances on that network with no issue, and I can see that Neutron has the port attached to the instance. However, in the instance this is all I get:

ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
    inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe30:c460/64 scope link
       valid_lft forever preferred_lft forever
3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else, including udev rules.

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik

------------------------------

Message: 17
Date: Sat, 20 Oct 2018 01:47:42 +0200
From: Gaël THEROND
To: Erik McCormick
Cc: openstack-operators
Subject: Re: [Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Erik!

Glad I'm not the only one having this issue with the SSL communication between the amphora and the control plane.

Even if I don't yet have a clear answer regarding that issue, I think your second issue is not an issue at all: the interface is mounted in a network namespace, so you'll need to list the NICs inside that namespace as well. Use ip netns ls to get the namespace.

Hope it will help.

On Fri, 19 Oct 2018 at 20:40, Erik McCormick wrote:
> I've been wrestling with getting Octavia up and running and have
> become stuck on two issues. I'm hoping someone has run into these
> before. My google foo has come up empty.
> [...]
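(Spelling out Gaël's namespace tip as commands, for anyone following along - a sketch run inside the amphora; amphora-haproxy is the namespace name the stock amphora image uses, so adjust if yours differs. Note that the VIP interface only lands in the namespace once the controller's plug-vip call succeeds, so with Erik's handshake failures it may genuinely be absent:)

$ sudo ip netns list
amphora-haproxy
$ sudo ip netns exec amphora-haproxy ip addr show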
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)) > > On the amphora side I see: > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > failure (_ssl.c:1754) > > I've generated certificates both with the script in the Octavia git > repo, and with the Openstack Ansible playbook. I can see that they are > present in /etc/octavia/certs. > > I'm using the Kolla (Queens) containers for the control plane so I'm > sure I've satisfied all the python library constraints. > > Issue 2: > I"m not sure how it gets configured, but the tenant network interface > (ens6) never comes up. I can spawn other instances on that network > with no issue, and I can see that Neutron has the port attached to the > instance. However, in the instance this is all I get: > > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: ens3: mtu 9000 qdisc pfifo_fast > state UP group default qlen 1000 > link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe30:c460/64 scope link > valid_lft forever preferred_lft forever > 3: ens6: mtu 1500 qdisc noop state DOWN group > default qlen 1000 > link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > There's no evidence of the interface anywhere else including udev rules. > > Any help with either or both issues would be greatly appreciated. > > Cheers, > Erik > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ------------------------------ End of OpenStack-operators Digest, Vol 96, Issue 7 ************************************************** -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gael.therond at gmail.com  Tue Oct 23 22:26:04 2018
From: gael.therond at gmail.com (Gaël THEROND)
Date: Wed, 24 Oct 2018 00:26:04 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without
	DVR
In-Reply-To:
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>
Message-ID:

For the record, I'm currently working on a fairly large overhaul of our
OpenStack services deployment using Kolla-Ansible. We're leveraging
kolla-ansible to migrate all of our legacy architecture as smoothly as
possible to a shiny new one using exactly the same topology as described
above (using Cumulus/Calico etc.).

One of the new services we're trying to provide with this method is
Octavia.

I too faced some trouble, but found it not that hard to solve, either by
carefully reading the current API reference, the available guides and the
source code, or by asking for help right here. The people answering
Octavia questions are IMHO blazing fast and really clear, and they add
great detail about the internal mechanisms, which is much appreciated.

As I've almost finished our own deployment, I have noted almost all the
pitfalls I hit and which parts of the documentation were missing. I'll
finish my deployment and testing and then write some clean (and, I hope,
as complete as possible) documentation, as I feel it's something that's
really needed.

On a side note regarding the CA and SSL, I had an issue that I solved by
correctly rebuilding my amphora. Another tip and trick here is to use
Barbican when possible, as it really helps a lot.

I hope this helps anyone else willing to use Octavia, as I truly think
this service is a huge addition to OpenStack and it's gaining more and
more momentum since the Pike/Queens releases.

On Tue, 23 Oct 2018 at 19:49, Michael Johnson wrote:

> I am still catching up on e-mail from the weekend.
>
> There are a lot of different options for how to implement the
> lb-mgmt-network for the controller<->amphora communication. I can't
> talk to what options Kolla provides, but I can talk to how Octavia
> works.
>
> One thing to note on the lb-mgmt-net issue: if you can set up routes
> such that the controllers can reach the IP addresses used for the
> lb-mgmt-net, and the amphorae can reach the controllers, Octavia
> can run with a routed lb-mgmt-net setup. There is no L2 requirement
> between the controllers and the amphora instances.
>
> Michael
>
> On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
> wrote:
> >
> > So in your other email you asked if there was a guide for
> > deploying it with kolla-ansible...
> >
> > Oh boy. No there's not. I don't know if you've seen my recent mails on
> > Octavia, but I am going through this deployment process with
> > kolla-ansible right now and it is lacking in a few areas.
> >
> > If you plan to use different CA certificates for client and server in
> > Octavia, you'll need to add that into the playbook. Presently it only
> > copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > everything. I was completely unable to make it work with only one CA
> > as I got some SSL errors. It passes gate though, so I assume it must
> > work? I dunno.
> >
> > Networking comments and a really messy kolla-ansible / octavia how-to
> > below...
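That matches what I ran into with the CA and SSL. For reference, the
options that have to line up are the certificate settings in octavia.conf,
roughly like this (just a sketch reusing the file names Erik mentions;
whether you point both at one CA or use two separate ones depends on your
deployment):

[certificates]
# CA used to sign the certificates that the amphorae present
ca_certificate = /etc/octavia/certs/ca_01.pem
ca_private_key = /etc/octavia/certs/cacert.key

[haproxy_amphora]
# certificate the controllers present to the amphorae
client_cert = /etc/octavia/certs/client.pem
# CA used to validate the certificate presented by the amphorae
server_ca = /etc/octavia/certs/ca_01.pem

With a single CA, ca_certificate and server_ca end up pointing at the same
file, which is the kind of setup that bit me until I correctly rebuilt my
amphora.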
> >
> > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> > wrote:
> > >
> > > On 10/23/18 at 3:20 PM, Erik McCormick wrote:
> > > > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > > > wrote:
> > > >>
> > > >> Hi,
> > > >>
> > > >> We did test Octavia with Pike (DVR deployment) and everything was
> > > >> working right out of the box. We changed our underlay network to a
> > > >> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> > > >> to have that many cables in a rack.
> > > >>
> > > >> Octavia is not working right now as the lb-mgmt-net does not exist on
> > > >> the compute nodes, nor does a br-ex.
> > > >>
> > > >> The control nodes are running
> > > >>
> > > >> octavia_worker
> > > >> octavia_housekeeping
> > > >> octavia_health_manager
> > > >> octavia_api
> > > >>
> > > >> and as far as I understood, octavia_worker, octavia_housekeeping and
> > > >> octavia_health_manager have to talk to the amphora instances. But the
> > > >> control nodes are spread over three different leafs, so each control
> > > >> node is in a different L2 domain.
> > > >>
> > > >> So the question is how to deploy a lb-mgmt-net network in our setup?
> > > >>
> > > >> - Compute nodes have no "stretched" L2 domain
> > > >> - Control nodes, compute nodes and network nodes are in L3 networks like
> > > >> api, storage, ...
> > > >> - Only network nodes are connected to a L2 domain (with a separate NIC)
> > > >> providing the "public" network
> > > >>
> > > > You'll need to add a new bridge to your compute nodes and create a
> > > > provider network associated with that bridge. In my setup this is
> > > > simply a flat network tied to a tagged interface. In your case it
> > > > probably makes more sense to make a new VNI and create a vxlan
> > > > provider network. The routing in your switches should handle the rest.
> > >
> > > OK, that's what I'm trying right now. But I don't get how to set up
> > > something like a VxLAN provider network. I thought only vlan and flat
> > > are supported as provider networks? I guess it is not possible to use
> > > the tunnel interface that is used for tenant networks?
> > > So I have to create a separate VxLAN on the control and compute nodes like:
> > >
> > > # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> > > dev vlan3535 ttl 5
> > > # ip addr add 172.16.1.11/20 dev vxoctavia
> > > # ip link set vxoctavia up
> > >
> > > and use it like a flat provider network, true?
> > >
> > This is a fine way of doing things, but it's only half the battle.
> > You'll need to add a bridge on the compute nodes and bind it to that
> > new interface. Something like this if you're using openvswitch:
> >
> > docker exec openvswitch_db
> > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
> >
> > Also you'll want to remove the IP address from that interface as it's
> > going to be a bridge. Think of it like your public (br-ex) interface
> > on your network nodes.
> >
> > From there you'll need to update the bridge mappings via kolla
> > overrides. This would usually be in /etc/kolla/config/neutron. Create
> > a subdirectory for your compute inventory group and create an
> > ml2_conf.ini there. So you'd end up with something like:
> >
> > [root at kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> > [ml2_type_flat]
> > flat_networks = mgmt-net
> >
> > [ovs]
> > bridge_mappings = mgmt-net:br-mgmt
> >
> > Run kolla-ansible --tags neutron reconfigure to push out the new
> > configs.
Note that there is a bug where the neutron containers may not > > restart after the change, so you'll probably need to do a 'docker > > container restart neutron_openvswitch_agent' on each compute node. > > > > At this point, you'll need to create the provider network in the admin > > project like: > > > > openstack network create --provider-network-type flat > > --provider-physical-network mgmt-net lb-mgmt-net > > > > And then create a normal subnet attached to this network with some > > largeish address scope. I wouldn't use 172.16.0.0/16 because docker > > uses that by default. I'm not sure if it matters since the network > > traffic will be isolated on a bridge, but it makes me paranoid so I > > avoided it. > > > > For your controllers, I think you can just let everything function off > > your api interface since you're routing in your spines. Set up a > > gateway somewhere from that lb-mgmt network and save yourself the > > complication of adding an interface to your controllers. If you choose > > to use a separate interface on your controllers, you'll need to make > > sure this patch is in your kolla-ansible install or cherry pick it. > > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > > > I don't think that's been backported at all, so unless you're running > > off master you'll need to go get it. > > > > From here on out, the regular Octavia instruction should serve you. > > Create a flavor, Create a security group, and capture their UUIDs > > along with the UUID of the provider network you made. Override them in > > globals.yml with: > > > > octavia_amp_boot_network_list: > > octavia_amp_secgroup_list: > > octavia_amp_flavor_id: > > > > This is all from my scattered notes and bad memory. Hopefully it makes > > sense. Corrections welcome. > > > > -Erik > > > > > > > > > > > > > > > > > > > > -Erik > > > >> > > > >> All the best, > > > >> Florian > > > >> _______________________________________________ > > > >> OpenStack-operators mailing list > > > >> OpenStack-operators at lists.openstack.org > > > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Tue Oct 23 23:45:23 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Tue, 23 Oct 2018 16:45:23 -0700 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: Message-ID: It (still) seems like there's something funky about admin/non-admin in your case. You could try "openstack --debug token issue" (in the admin and non-admin cases), and examine the token dict that gets output. Look for the "roles" list and "is_admin_project". ~iain On 10/23/2018 03:21 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > We have submitted a bug for this > > https://bugs.launchpad.net/glance/+bug/1799588 > > > Mike Moore, M.S.S.E. 
>
> Systems Engineer, Goddard Private Cloud
>
> Michael.D.Moore at nasa.gov
>
> **
>
> Hydrogen fusion brightens my day.
>
> *From: *"Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
>
> *Date: *Saturday, October 20, 2018 at 7:22 PM
> *To: *Logan Hicks ,
> "openstack-operators at lists.openstack.org"
>
> *Subject: *Re: [Openstack-operators] OpenStack-operators Digest, Vol 96,
> Issue 7
>
> The images exist and are bootable. I'm going to trace through the actual
> code for glance API. Any suggestions on where the show/hide logic is
> when it filters responses? I'm new to digging through OpenStack code.
>
> ------------------------------------------------------------------------
>
> *From:* Logan Hicks [logan.hicks at live.com]
> *Sent:* Friday, October 19, 2018 8:00 PM
> *To:* openstack-operators at lists.openstack.org
> *Subject:* Re: [Openstack-operators] OpenStack-operators Digest, Vol 96,
> Issue 7
>
> Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants (Chris Apsey)
>
> I noticed that the image says queued. If I'm not mistaken, an image can't
> have permissions applied until after the image is created, which might
> explain the issue he's seeing.
>
> The object doesn't exist until it's made by OpenStack.
>
> I'd check to see if something is holding up images being made. I'd start
> with glance.
>
> Respectfully,
>
> Logan Hicks
>
> -------- Original message --------
> From: openstack-operators-request at lists.openstack.org
> Date: 10/19/18 7:49 PM (GMT-05:00)
> To: openstack-operators at lists.openstack.org
> Subject: OpenStack-operators Digest, Vol 96, Issue 7
>
> Send OpenStack-operators mailing list submissions to
>         openstack-operators at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> or, via email, send a message with subject or body 'help' to
>         openstack-operators-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-operators-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-operators digest..."
>
> Today's Topics:
>
>    1. [nova] Removing the CachingScheduler (Matt Riedemann)
>    2. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants
>       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
>    3. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants (Chris Apsey)
>    4. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants (iain MacDonnell)
>    5. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants
>       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
>    6. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants (iain MacDonnell)
>    7. Re: Glance Image Visibility Issue? - Non admin users can see
>       private images from other tenants (Chris Apsey)
>    8. osops-tools-monitoring Dependency problems (Tomáš Vondra)
>    9. [heat][cinder] How to create stack snapshot including volumes
>       (Christian Zunker)
>   10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias)
>   11.
Re: [Openstack-sigs] [all] Naming the T   release of OpenStack >       (Tony Breeds) >   12. Re: Glance Image Visibility Issue? - Non admin users can see >       private images from other tenants >       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) >   13. Re: Glance Image Visibility Issue? - Non admin users can see >       private images from other tenants >       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) >   14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes) >   15. Re: Fleio - OpenStack billing - ver. 1.1  released (Mohammed Naser) >   16. [Octavia] SSL errors polling amphorae and missing tenant >       network interface (Erik McCormick) >   17. Re: [Octavia] SSL errors polling amphorae and missing tenant >       network interface (Gaël THEROND) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 18 Oct 2018 17:07:00 -0500 > From: Matt Riedemann > To: "openstack-operators at lists.openstack.org" >         > Subject: [Openstack-operators] [nova] Removing the CachingScheduler > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > It's been deprecated since Pike, and the time has come to remove it [1]. > > mgagne has been the most vocal CachingScheduler operator I know and he > has tested out the "nova-manage placement heal_allocations" CLI, added > in Rocky, and said it will work for migrating his deployment from the > CachingScheduler to the FilterScheduler + Placement. > > If you are using the CachingScheduler and have a problem with its > removal, now is the time to speak up or forever hold your peace. > > [1] https://review.openstack.org/#/c/611723/1 > > > -- > > Thanks, > > Matt > > > > ------------------------------ > > Message: 2 > Date: Thu, 18 Oct 2018 22:11:40 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         > To: iain MacDonnell , >         "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset="utf-8" > > I have replicated this unexpected behavior in a Pike test environment, > in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > >     Yes. I verified it by creating a non-admin user in a different > tenant. I created a new image, set to private with the project defined > as our admin tenant. > >     In the database I can see that the image is 'private' and the owner > is the ID of the admin tenant. > >     Mike Moore, M.S.S.E. > >     Systems Engineer, Goddard Private Cloud >     Michael.D.Moore at nasa.gov > >     Hydrogen fusion brightens my day. > > >     On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: > > > >         On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >         INTEGRA, INC.] wrote: >         > I’m seeing unexpected behavior in our Queens environment > related to >         > Glance image visibility. Specifically users who, based on my >         > understanding of the visibility and ownership fields, should > NOT be able >         > to see or view the image. 
>         > >         > If I create a new image with openstack image create and > specify –project >         > and –private a non-admin user in a different tenant > can see and >         > boot that image. >         > >         > That seems to be the opposite of what should happen. Any ideas? > >         Yep, something's not right there. > >         Are you sure that the user that can see the image doesn't have > the admin >         role (for the project in its keystone token) ? > >         Did you verify that the image's owner is what you intended, and > that the >         visibility really is "private" ? > >              ~iain > >         _______________________________________________ >         OpenStack-operators mailing list >         OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > >     _______________________________________________ >     OpenStack-operators mailing list >     OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > ------------------------------ > > Message: 3 > Date: Thu, 18 Oct 2018 18:23:35 -0400 > From: Chris Apsey > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         , iain MacDonnell >         , >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non     admin users can see private images from other tenants > Message-ID: >         <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> > Content-Type: text/plain; format=flowed; charset="UTF-8" > > Do you have a liberal/custom policy.json that perhaps is causing unexpected > behavior?  Can't seem to reproduce this. > > On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > >> I have replicated this unexpected behavior in a Pike test environment, in >> addition to our Queens environment. >> >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >> INC.]" wrote: >> >>    Yes. I verified it by creating a non-admin user in a different tenant. I >>    created a new image, set to private with the project defined as our admin >>    tenant. >> >>    In the database I can see that the image is 'private' and the owner is the >>    ID of the admin tenant. >> >>    Mike Moore, M.S.S.E. >> >>    Systems Engineer, Goddard Private Cloud >>    Michael.D.Moore at nasa.gov >> >>    Hydrogen fusion brightens my day. >> >> >>    On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >> >> >> >>        On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>        INTEGRA, INC.] wrote: >>> I’m seeing unexpected behavior in our Queens environment related to >>> Glance image visibility. Specifically users who, based on my >>> understanding of the visibility and ownership fields, should NOT be able >>> to see or view the image. >>> >>> If I create a new image with openstack image create and specify –project >>> and –private a non-admin user in a different tenant can see and >>> boot that image. >>> >>> That seems to be the opposite of what should happen. Any ideas? >> >>        Yep, something's not right there. >> >>        Are you sure that the user that can see the image doesn't have the admin >>        role (for the project in its keystone token) ? 
>> >>        Did you verify that the image's owner is what you intended, and that the >>        visibility really is "private" ? >> >>             ~iain >> >>        _______________________________________________ >>        OpenStack-operators mailing list >>        OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >>    _______________________________________________ >>    OpenStack-operators mailing list >>    OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > > ------------------------------ > > Message: 4 > Date: Thu, 18 Oct 2018 15:25:22 -0700 > From: iain MacDonnell > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         , > "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non admin users can see private images from other tenants > Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com> > Content-Type: text/plain; charset=utf-8; format=flowed > > > I suspect that your non-admin user is not really non-admin. How did you > create it? > > What you have for "context_is_admin" in glance's policy.json ? > >      ~iain > > > On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. >> >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: >> >>      Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. >> >>      In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. >> >>      Mike Moore, M.S.S.E. >> >>      Systems Engineer, Goddard Private Cloud >>      Michael.D.Moore at nasa.gov >> >>      Hydrogen fusion brightens my day. >> >> >>      On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >> >> >> >>          On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>          INTEGRA, INC.] wrote: >>          > I’m seeing unexpected behavior in our Queens environment related to >>          > Glance image visibility. Specifically users who, based on my >>          > understanding of the visibility and ownership fields, should NOT be able >>          > to see or view the image. >>          > >>          > If I create a new image with openstack image create and specify –project >>          > and –private a non-admin user in a different tenant can see and >>          > boot that image. >>          > >>          > That seems to be the opposite of what should happen. Any ideas? >> >>          Yep, something's not right there. >> >>          Are you sure that the user that can see the image doesn't have the admin >>          role (for the project in its keystone token) ? >> >>          Did you verify that the image's owner is what you intended, and that the >>          visibility really is "private" ? 
>> >>               ~iain >> >>          _______________________________________________ >>          OpenStack-operators mailing list >>          OpenStack-operators at lists.openstack.org >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >> >> >>      _______________________________________________ >>      OpenStack-operators mailing list >>      OpenStack-operators at lists.openstack.org >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >> >> > > > > ------------------------------ > > Message: 5 > Date: Thu, 18 Oct 2018 22:32:42 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         > To: iain MacDonnell , >         "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non admin users can see private images from other tenants > Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov> > Content-Type: text/plain; charset="utf-8" > > openstack user create --domain default --password xxxxxxxx > --project-domain ndc --project test mike > > > openstack role add --user mike --user-domain default --project test user > > my admin account is in the NDC domain with a different username. > > > > /etc/glance/policy.json > { > > "context_is_admin":  "role:admin", > "default": "role:admin", > > > > > I'm not terribly familiar with the policies but I feel like that default > line is making everyone an admin by default? > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: > > >     I suspect that your non-admin user is not really non-admin. How did > you >     create it? > >     What you have for "context_is_admin" in glance's policy.json ? > >          ~iain > > >     On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >     INTEGRA, INC.] wrote: >     > I have replicated this unexpected behavior in a Pike test > environment, in addition to our Queens environment. >     > >     > >     > >     > Mike Moore, M.S.S.E. >     > >     > Systems Engineer, Goddard Private Cloud >     > Michael.D.Moore at nasa.gov >     > >     > Hydrogen fusion brightens my day. >     > >     > >     > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: >     > >     >      Yes. I verified it by creating a non-admin user in a > different tenant. I created a new image, set to private with the project > defined as our admin tenant. >     > >     >      In the database I can see that the image is 'private' and > the owner is the ID of the admin tenant. >     > >     >      Mike Moore, M.S.S.E. >     > >     >      Systems Engineer, Goddard Private Cloud >     >      Michael.D.Moore at nasa.gov >     > >     >      Hydrogen fusion brightens my day. 
>     > >     > >     >      On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: >     > >     > >     > >     >          On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS >     >          INTEGRA, INC.] wrote: >     >          > I’m seeing unexpected behavior in our Queens > environment related to >     >          > Glance image visibility. Specifically users who, based > on my >     >          > understanding of the visibility and ownership fields, > should NOT be able >     >          > to see or view the image. >     >          > >     >          > If I create a new image with openstack image create > and specify –project >     >          > and –private a non-admin user in a different > tenant can see and >     >          > boot that image. >     >          > >     >          > That seems to be the opposite of what should happen. > Any ideas? >     > >     >          Yep, something's not right there. >     > >     >          Are you sure that the user that can see the image > doesn't have the admin >     >          role (for the project in its keystone token) ? >     > >     >          Did you verify that the image's owner is what you > intended, and that the >     >          visibility really is "private" ? >     > >     >               ~iain >     > >     >          _______________________________________________ >     >          OpenStack-operators mailing list >     >          OpenStack-operators at lists.openstack.org >     > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >     > >     > >     >      _______________________________________________ >     >      OpenStack-operators mailing list >     >      OpenStack-operators at lists.openstack.org >     > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >     > >     > > > > > ------------------------------ > > Message: 6 > Date: Thu, 18 Oct 2018 15:48:27 -0700 > From: iain MacDonnell > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         , > "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
> >      ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field               | Value                            | > +---------------------+----------------------------------+ > | domain_id           | default                          | > | enabled             | True                             | > | id                  | d18c0031ec56430499a2d690cb1f125c | > | name                | user1                            | > | options             | {}                               | > | password_expires_at | None                             | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field               | Value                            | > +---------------------+----------------------------------+ > | domain_id           | default                          | > | enabled             | True                             | > | id                  | be9f1061a5104abd834eabe98dff055d | > | name                | user2                            | > | options             | {}                               | > | password_expires_at | None                             | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field       | Value                            | > +-------------+----------------------------------+ > | description |                                  | > | domain_id   | default                          | > | enabled     | True                             | > | id          | 826876d6d3724018bae6253c7f540cb3 | > | is_domain   | False                            | > | name        | project1                         | > | parent_id   | default                          | > | tags        | []                               | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field       | Value                            | > +-------------+----------------------------------+ > | description |                                  | > | domain_id   | default                          | > | enabled     | True                             | > | id          | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain   | False                            | > | name        | project2                         | > | parent_id   | default                          | > | tags        | []                               | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID                                   | Name   | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field            | Value >                           | > +------------------+------------------------------------------------------------------------------+ > | checksum     
    | None >                           | > | container_format | bare >                           | > | created_at       | 2018-10-18T22:17:41Z >                           | > | disk_format      | raw >                           | > | file             | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file >      | > | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >                           | > | min_disk         | 0 >                           | > | min_ram          | 0 >                           | > | name             | image1 >                           | > | owner            | 826876d6d3724018bae6253c7f540cb3 >                           | > | properties       | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected        | False >                           | > | schema           | /v2/schemas/image >                           | > | size             | None >                           | > | status           | queued >                           | > | tags             | >                           | > | updated_at       | 2018-10-18T22:17:41Z >                           | > | virtual_size     | None >                           | > | visibility       | private >                           | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID                                   | Name   | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID                                   | Name   | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID                                   | Name   | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin":  "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. 
>> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >>      I suspect that your non-admin user is not really non-admin. How did you >>      create it? >> >>      What you have for "context_is_admin" in glance's policy.json ? >> >>           ~iain >> >> >>      On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>      INTEGRA, INC.] wrote: >>      > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. >>      > >>      > >>      > >>      > Mike Moore, M.S.S.E. >>      > >>      > Systems Engineer, Goddard Private Cloud >>      > Michael.D.Moore at nasa.gov >>      > >>      > Hydrogen fusion brightens my day. >>      > >>      > >>      > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: >>      > >>      >      Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. >>      > >>      >      In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. >>      > >>      >      Mike Moore, M.S.S.E. >>      > >>      >      Systems Engineer, Goddard Private Cloud >>      >      Michael.D.Moore at nasa.gov >>      > >>      >      Hydrogen fusion brightens my day. >>      > >>      > >>      >      On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>      > >>      > >>      > >>      >          On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>      >          INTEGRA, INC.] wrote: >>      >          > I’m seeing unexpected behavior in our Queens environment related to >>      >          > Glance image visibility. Specifically users who, based on my >>      >          > understanding of the visibility and ownership fields, should NOT be able >>      >          > to see or view the image. >>      >          > >>      >          > If I create a new image with openstack image create and specify –project >>      >          > and –private a non-admin user in a different tenant can see and >>      >          > boot that image. >>      >          > >>      >          > That seems to be the opposite of what should happen. Any ideas? >>      > >>      >          Yep, something's not right there. >>      > >>      >          Are you sure that the user that can see the image doesn't have the admin >>      >          role (for the project in its keystone token) ? >>      > >>      >          Did you verify that the image's owner is what you intended, and that the >>      >          visibility really is "private" ? 
>>      > >>      >               ~iain >>      > >>      >          _______________________________________________ >>      >          OpenStack-operators mailing list >>      >          OpenStack-operators at lists.openstack.org >>      > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>      > >>      > >>      >      _______________________________________________ >>      >      OpenStack-operators mailing list >>      >      OpenStack-operators at lists.openstack.org >>      > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>      > >>      > >> >> > > > > ------------------------------ > > Message: 7 > Date: Thu, 18 Oct 2018 19:23:42 -0400 > From: Chris Apsey > To: iain MacDonnell , "Moore, Michael Dane >         (GSFC-720.0)[BUSINESS INTEGRA, INC.]" , >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non     admin users can see private images from other tenants > Message-ID: >         <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> > Content-Type: text/plain; format=flowed; charset="UTF-8" > > We are using multiple keystone domains - still can't reproduce this. > > Do you happen to have a customized keystone policy.json? > > Worst case, I would launch a devstack of your targeted release.  If you > can't reproduce the issue there, you would at least know its caused by a > nonstandard config rather than a bug (or at least not a bug that's present > when using a default config) > > On October 18, 2018 18:50:12 iain MacDonnell > wrote: > >> That all looks fine. >> >> I believe that the "default" policy applies in place of any that's not >> explicitly specified - i.e. "if there's no matching policy below, you >> need to have the admin role to be able to do it". I do have that line in >> my policy.json, and I cannot reproduce your problem (see below). >> >> I'm not using domains (other than "default"). I wonder if that's a factor... 
>> >>     ~iain >> >> >> $ openstack user create --password foo user1 >> +---------------------+----------------------------------+ >> | Field               | Value                            | >> +---------------------+----------------------------------+ >> | domain_id           | default                          | >> | enabled             | True                             | >> | id                  | d18c0031ec56430499a2d690cb1f125c | >> | name                | user1                            | >> | options             | {}                               | >> | password_expires_at | None                             | >> +---------------------+----------------------------------+ >> $ openstack user create --password foo user2 >> +---------------------+----------------------------------+ >> | Field               | Value                            | >> +---------------------+----------------------------------+ >> | domain_id           | default                          | >> | enabled             | True                             | >> | id                  | be9f1061a5104abd834eabe98dff055d | >> | name                | user2                            | >> | options             | {}                               | >> | password_expires_at | None                             | >> +---------------------+----------------------------------+ >> $ openstack project create project1 >> +-------------+----------------------------------+ >> | Field       | Value                            | >> +-------------+----------------------------------+ >> | description |                                  | >> | domain_id   | default                          | >> | enabled     | True                             | >> | id          | 826876d6d3724018bae6253c7f540cb3 | >> | is_domain   | False                            | >> | name        | project1                         | >> | parent_id   | default                          | >> | tags        | []                               | >> +-------------+----------------------------------+ >> $ openstack project create project2 >> +-------------+----------------------------------+ >> | Field       | Value                            | >> +-------------+----------------------------------+ >> | description |                                  | >> | domain_id   | default                          | >> | enabled     | True                             | >> | id          | b446b93ac6e24d538c1943acbdd13cb2 | >> | is_domain   | False                            | >> | name        | project2                         | >> | parent_id   | default                          | >> | tags        | []                               | >> +-------------+----------------------------------+ >> $ openstack role add --user user1 --project project1 _member_ >> $ openstack role add --user user2 --project project2 _member_ >> $ export OS_PASSWORD=foo >> $ export OS_USERNAME=user1 >> $ export OS_PROJECT_NAME=project1 >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID                                   | Name   | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> +--------------------------------------+--------+--------+ >> $ openstack image create --private image1 >> +------------------+------------------------------------------------------------------------------+ >> | Field            | Value >>                          | >> 
+------------------+------------------------------------------------------------------------------+ >> | checksum         | None >>                          | >> | container_format | bare >>                          | >> | created_at       | 2018-10-18T22:17:41Z >>                          | >> | disk_format      | raw >>                          | >> | file             | >> /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file >>     | >> | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >>                          | >> | min_disk         | 0 >>                          | >> | min_ram          | 0 >>                          | >> | name             | image1 >>                          | >> | owner            | 826876d6d3724018bae6253c7f540cb3 >>                          | >> | properties       | locations='[]', os_hash_algo='None', >> os_hash_value='None', os_hidden='False' | >> | protected        | False >>                          | >> | schema           | /v2/schemas/image >>                          | >> | size             | None >>                          | >> | status           | queued >>                          | >> | tags             | >>                          | >> | updated_at       | 2018-10-18T22:17:41Z >>                          | >> | virtual_size     | None >>                          | >> | visibility       | private >>                          | >> +------------------+------------------------------------------------------------------------------+ >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID                                   | Name   | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >> +--------------------------------------+--------+--------+ >> $ export OS_USERNAME=user2 >> $ export OS_PROJECT_NAME=project2 >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID                                   | Name   | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> +--------------------------------------+--------+--------+ >> $ export OS_USERNAME=admin >> $ export OS_PROJECT_NAME=admin >> $ export OS_PASSWORD=xxx >> $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >> $ export OS_USERNAME=user2 >> $ export OS_PROJECT_NAME=project2 >> $ export OS_PASSWORD=foo >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID                                   | Name   | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >> +--------------------------------------+--------+--------+ >> $ >> >> >> On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> openstack user create --domain default --password xxxxxxxx --project-domain >>> ndc --project test mike >>> >>> >>> openstack role add --user mike --user-domain default --project test user >>> >>> my admin account is in the NDC domain with a different username. 
>>> >>> >>> >>> /etc/glance/policy.json >>> { >>> >>> "context_is_admin":  "role:admin", >>> "default": "role:admin", >>> >>> >>> >>> >>> I'm not terribly familiar with the policies but I feel like that default >>> line is making everyone an admin by default? >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >>> >>> >>> I suspect that your non-admin user is not really non-admin. How did you >>> create it? >>> >>> What you have for "context_is_admin" in glance's policy.json ? >>> >>>  ~iain >>> >>> >>> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>>> I have replicated this unexpected behavior in a Pike test environment, in >>>> addition to our Queens environment. >>>> >>>> >>>> >>>> Mike Moore, M.S.S.E. >>>> >>>> Systems Engineer, Goddard Private Cloud >>>> Michael.D.Moore at nasa.gov >>>> >>>> Hydrogen fusion brightens my day. >>>> >>>> >>>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>>> INC.]" wrote: >>>> >>>>    Yes. I verified it by creating a non-admin user in a different tenant. I >>>>    created a new image, set to private with the project defined as our admin >>>>    tenant. >>>> >>>>    In the database I can see that the image is 'private' and the owner is the >>>>    ID of the admin tenant. >>>> >>>>    Mike Moore, M.S.S.E. >>>> >>>>    Systems Engineer, Goddard Private Cloud >>>>    Michael.D.Moore at nasa.gov >>>> >>>>    Hydrogen fusion brightens my day. >>>> >>>> >>>>    On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>>> >>>> >>>> >>>>        On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>>>        INTEGRA, INC.] wrote: >>>>        > I’m seeing unexpected behavior in our Queens environment related to >>>>        > Glance image visibility. Specifically users who, based on my >>>>        > understanding of the visibility and ownership fields, should NOT be able >>>>        > to see or view the image. >>>>        > >>>>        > If I create a new image with openstack image create and specify –project >>>>        > and –private a non-admin user in a different tenant can see and >>>>        > boot that image. >>>>        > >>>>        > That seems to be the opposite of what should happen. Any ideas? >>>> >>>>        Yep, something's not right there. >>>> >>>>        Are you sure that the user that can see the image doesn't have the admin >>>>        role (for the project in its keystone token) ? >>>> >>>>        Did you verify that the image's owner is what you intended, and that the >>>>        visibility really is "private" ? 
>>>> >>>>             ~iain >>>> >>>>        _______________________________________________ >>>>        OpenStack-operators mailing list >>>>        OpenStack-operators at lists.openstack.org >>>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >>>> >>>> >>>>    _______________________________________________ >>>>    OpenStack-operators mailing list >>>>    OpenStack-operators at lists.openstack.org >>>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > > ------------------------------ > > Message: 8 > Date: Fri, 19 Oct 2018 10:58:30 +0200 > From: Tomáš Vondra > To: > Subject: [Openstack-operators] osops-tools-monitoring Dependency >         problems > Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz> > Content-Type: text/plain;       charset="iso-8859-2" > > Hi! > I'm a long time user of monitoring-for-openstack, also known as oschecks. > Concretely, I used a version from 2015 with OpenStack python client > libraries from Kilo. Now I have upgraded them to Mitaka and it got broken. > Even the latest oschecks don't work. I didn't quite expect that, given that > there are several commits from this year e.g. by Nagasai Vinaykumar > Kapalavai and paramite. Can one of them or some other user step up and say > what version of OpenStack clients is oschecks working with? Ideally, write > it down in requirements.txt so that it will be reproducible? Also, some > documentation of what is the minimal set of parameters would also come in > handy. > Thanks a lot, Tomas from Homeatcloud > > The error messages are as absurd as: > oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0 > ' > --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring > > CRITICAL: Traceback (most recent call last): >   File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in > safe_run >     method() >   File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in > _check_glance_api >     glance = utils.Glance() >   File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in > __init__ >     self.glance.parser = self.glance.get_base_parser(sys.argv) > TypeError: get_base_parser() takes exactly 1 argument (2 given) > > (I can see 4 parameters on the command line.) > > > > > ------------------------------ > > Message: 9 > Date: Fri, 19 Oct 2018 11:21:25 +0200 > From: Christian Zunker > To: openstack-operators > Subject: [Openstack-operators] [heat][cinder] How to create stack >         snapshot        including volumes > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > Hi List, > > I'd like to take snapshots of heat stacks including the volumes. >>From what I found until now, this should be possible. You just have to > configure some parts of OpenStack. 
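>
> Concretely, the two pieces involved on my side look like this (the
> [volumes] section name is my reading of the heat configuration
> reference, so please double-check it for your release; the stack name
> is just an example):
>
> # heat.conf on the heat-engine nodes -- section name per the config
> # reference, verify for your release
> [volumes]
> backups_enabled = True
>
> # then snapshot the stack
> $ openstack stack snapshot create --name snap1 my-stack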
>
> I enabled cinder-backup with ceph backend. Backups from volumes are working.
> I configured heat to include the option backups_enabled = True.
>
> When I use openstack stack snapshot create, I get a snapshot but no backups
> of my volumes. I don't get any error messages in heat. Debug logging didn't
> help either.
>
> OpenStack version is Pike on Ubuntu installed with openstack-ansible.
> heat version is 9.0.3. So this should also include this bugfix:
> https://bugs.launchpad.net/heat/+bug/1687006
>
> Is anybody using this feature? What am I missing?
>
> Best regards
> Christian
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> Message: 10
> Date: Fri, 19 Oct 2018 12:42:00 +0300
> From: Adrian Andreias
> To: openstack-operators at lists.openstack.org
> Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1
>         released
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> Hello,
>
> We've just released Fleio version 1.1.
>
> Fleio is a billing solution and control panel for OpenStack public clouds
> and traditional web hosters.
>
> Fleio software automates the entire process for cloud users. New customers
> can use Fleio to sign up for an account, pay invoices, add credit to their
> account, as well as create and manage cloud resources such as virtual
> machines, storage and networking.
>
> Full feature list:
> https://fleio.com#features
>
> You can see an online demo:
> https://fleio.com/demo
>
> And sign-up for a free trial:
> https://fleio.com/signup
>
> Cheers!
>
> - Adrian Andreias
> https://fleio.com
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> Message: 11
> Date: Fri, 19 Oct 2018 20:54:29 +1100
> From: Tony Breeds
> To: OpenStack Development ,
>         OpenStack SIGs , OpenStack
>         Operators
> Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T
>         release of OpenStack
> Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
>> Hello all,
>>     As per [1] the nomination period for names for the T release has
>> now closed (actually 3 days ago, sorry).  The nominated names and any
>> qualifying remarks can be seen at [2].
>>
>> Proposed Names
>>  * Tarryall
>>  * Teakettle
>>  * Teller
>>  * Telluride
>>  * Thomas
>>  * Thornton
>>  * Tiger
>>  * Tincup
>>  * Timnath
>>  * Timber
>>  * Tiny Town
>>  * Torreys
>>  * Trail
>>  * Trinidad
>>  * Treasure
>>  * Troublesome
>>  * Trussville
>>  * Turret
>>  * Tyrone
>>
>> Proposed Names that do not meet the criteria
>>  * Train
>
> I have re-worked my openstack/governance change [1] to ask the TC to accept
> adding Train to the poll as (partially) described in [2].
>
> I present the names above to the community and Foundation marketing team
> for consideration.  The list above does contain Train; clearly, if the TC
> do not approve [1], Train will not be included in the poll when created.
>
> I apologise for any offence or slight caused by my previous email in
> this thread.  It was well-intentioned albeit, with hindsight, poorly
> thought through.
>
> Yours Tony.
> > [1] https://review.openstack.org/#/c/611511/ > > [2] > https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: signature.asc > Type: application/pgp-signature > Size: 488 bytes > Desc: not available > URL: > > > > ------------------------------ > > Message: 12 > Date: Fri, 19 Oct 2018 16:33:17 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         > To: Chris Apsey , iain MacDonnell >         , >         "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - >         Non admin users can see private images from other tenants > Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov> > Content-Type: text/plain; charset="utf-8" > > Our NDC domain is LDAP backed. Default is not. > > Our keystone policy.json file is empty {} > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 7:24 PM, "Chris Apsey" wrote: > >     We are using multiple keystone domains - still can't reproduce this. > >     Do you happen to have a customized keystone policy.json? > >     Worst case, I would launch a devstack of your targeted release.  If > you >     can't reproduce the issue there, you would at least know its caused > by a >     nonstandard config rather than a bug (or at least not a bug that's > present >     when using a default config) > >     On October 18, 2018 18:50:12 iain MacDonnell > >     wrote: > >     > That all looks fine. >     > >     > I believe that the "default" policy applies in place of any > that's not >     > explicitly specified - i.e. "if there's no matching policy below, you >     > need to have the admin role to be able to do it". I do have that > line in >     > my policy.json, and I cannot reproduce your problem (see below). >     > >     > I'm not using domains (other than "default"). I wonder if that's > a factor... 
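One way to settle the domain question is to dump every role assignment the user holds, including inherited and effective ones. A sketch, with the username from this thread as a placeholder:

$ openstack role assignment list --user mike --names
$ openstack role assignment list --user mike --names --effective

An unexpected admin row here — in particular one scoped to a domain and inherited down to its projects — would produce exactly this kind of "private images visible to everyone" behaviour.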
>     > >     >     ~iain >     > >     > >     > $ openstack user create --password foo user1 >     > +---------------------+----------------------------------+ >     > | Field               | Value                            | >     > +---------------------+----------------------------------+ >     > | domain_id           | default                          | >     > | enabled             | True                             | >     > | id                  | d18c0031ec56430499a2d690cb1f125c | >     > | name                | user1                            | >     > | options             | {}                               | >     > | password_expires_at | None                             | >     > +---------------------+----------------------------------+ >     > $ openstack user create --password foo user2 >     > +---------------------+----------------------------------+ >     > | Field               | Value                            | >     > +---------------------+----------------------------------+ >     > | domain_id           | default                          | >     > | enabled             | True                             | >     > | id                  | be9f1061a5104abd834eabe98dff055d | >     > | name                | user2                            | >     > | options             | {}                               | >     > | password_expires_at | None                             | >     > +---------------------+----------------------------------+ >     > $ openstack project create project1 >     > +-------------+----------------------------------+ >     > | Field       | Value                            | >     > +-------------+----------------------------------+ >     > | description |                                  | >     > | domain_id   | default                          | >     > | enabled     | True                             | >     > | id          | 826876d6d3724018bae6253c7f540cb3 | >     > | is_domain   | False                            | >     > | name        | project1                         | >     > | parent_id   | default                          | >     > | tags        | []                               | >     > +-------------+----------------------------------+ >     > $ openstack project create project2 >     > +-------------+----------------------------------+ >     > | Field       | Value                            | >     > +-------------+----------------------------------+ >     > | description |                                  | >     > | domain_id   | default                          | >     > | enabled     | True                             | >     > | id          | b446b93ac6e24d538c1943acbdd13cb2 | >     > | is_domain   | False                            | >     > | name        | project2                         | >     > | parent_id   | default                          | >     > | tags        | []                               | >     > +-------------+----------------------------------+ >     > $ openstack role add --user user1 --project project1 _member_ >     > $ openstack role add --user user2 --project project2 _member_ >     > $ export OS_PASSWORD=foo >     > $ export OS_USERNAME=user1 >     > $ export OS_PROJECT_NAME=project1 >     > $ openstack image list >     > +--------------------------------------+--------+--------+ >     > | ID                                   | Name   | Status | >     > +--------------------------------------+--------+--------+ >     > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >     > 
+--------------------------------------+--------+--------+ >     > $ openstack image create --private image1 >     > > +------------------+------------------------------------------------------------------------------+ >     > | Field            | Value >     >                          | >     > > +------------------+------------------------------------------------------------------------------+ >     > | checksum         | None >     >                          | >     > | container_format | bare >     >                          | >     > | created_at       | 2018-10-18T22:17:41Z >     >                          | >     > | disk_format      | raw >     >                          | >     > | file             | >     > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file >     >     | >     > | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >     >                          | >     > | min_disk         | 0 >     >                          | >     > | min_ram          | 0 >     >                          | >     > | name             | image1 >     >                          | >     > | owner            | 826876d6d3724018bae6253c7f540cb3 >     >                          | >     > | properties       | locations='[]', os_hash_algo='None', >     > os_hash_value='None', os_hidden='False' | >     > | protected        | False >     >                          | >     > | schema           | /v2/schemas/image >     >                          | >     > | size             | None >     >                          | >     > | status           | queued >     >                          | >     > | tags             | >     >                          | >     > | updated_at       | 2018-10-18T22:17:41Z >     >                          | >     > | virtual_size     | None >     >                          | >     > | visibility       | private >     >                          | >     > > +------------------+------------------------------------------------------------------------------+ >     > $ openstack image list >     > +--------------------------------------+--------+--------+ >     > | ID                                   | Name   | Status | >     > +--------------------------------------+--------+--------+ >     > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >     > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >     > +--------------------------------------+--------+--------+ >     > $ export OS_USERNAME=user2 >     > $ export OS_PROJECT_NAME=project2 >     > $ openstack image list >     > +--------------------------------------+--------+--------+ >     > | ID                                   | Name   | Status | >     > +--------------------------------------+--------+--------+ >     > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >     > +--------------------------------------+--------+--------+ >     > $ export OS_USERNAME=admin >     > $ export OS_PROJECT_NAME=admin >     > $ export OS_PASSWORD=xxx >     > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >     > $ export OS_USERNAME=user2 >     > $ export OS_PROJECT_NAME=project2 >     > $ export OS_PASSWORD=foo >     > $ openstack image list >     > +--------------------------------------+--------+--------+ >     > | ID                                   | Name   | Status | >     > +--------------------------------------+--------+--------+ >     > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >     > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >     > 
+--------------------------------------+--------+--------+ >     > $ >     > >     > >     > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >     > INTEGRA, INC.] wrote: >     >> openstack user create --domain default --password xxxxxxxx > --project-domain >     >> ndc --project test mike >     >> >     >> >     >> openstack role add --user mike --user-domain default --project > test user >     >> >     >> my admin account is in the NDC domain with a different username. >     >> >     >> >     >> >     >> /etc/glance/policy.json >     >> { >     >> >     >> "context_is_admin":  "role:admin", >     >> "default": "role:admin", >     >> >     >> >     >> >     >> >     >> I'm not terribly familiar with the policies but I feel like that > default >     >> line is making everyone an admin by default? >     >> >     >> >     >> Mike Moore, M.S.S.E. >     >> >     >> Systems Engineer, Goddard Private Cloud >     >> Michael.D.Moore at nasa.gov >     >> >     >> Hydrogen fusion brightens my day. >     >> >     >> >     >> On 10/18/18, 6:25 PM, "iain MacDonnell" > wrote: >     >> >     >> >     >> I suspect that your non-admin user is not really non-admin. How > did you >     >> create it? >     >> >     >> What you have for "context_is_admin" in glance's policy.json ? >     >> >     >>  ~iain >     >> >     >> >     >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >     >> INTEGRA, INC.] wrote: >     >>> I have replicated this unexpected behavior in a Pike test > environment, in >     >>> addition to our Queens environment. >     >>> >     >>> >     >>> >     >>> Mike Moore, M.S.S.E. >     >>> >     >>> Systems Engineer, Goddard Private Cloud >     >>> Michael.D.Moore at nasa.gov >     >>> >     >>> Hydrogen fusion brightens my day. >     >>> >     >>> >     >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane > (GSFC-720.0)[BUSINESS INTEGRA, >     >>> INC.]" wrote: >     >>> >     >>>    Yes. I verified it by creating a non-admin user in a > different tenant. I >     >>>    created a new image, set to private with the project defined > as our admin >     >>>    tenant. >     >>> >     >>>    In the database I can see that the image is 'private' and > the owner is the >     >>>    ID of the admin tenant. >     >>> >     >>>    Mike Moore, M.S.S.E. >     >>> >     >>>    Systems Engineer, Goddard Private Cloud >     >>>    Michael.D.Moore at nasa.gov >     >>> >     >>>    Hydrogen fusion brightens my day. >     >>> >     >>> >     >>>    On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: >     >>> >     >>> >     >>> >     >>>        On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS >     >>>        INTEGRA, INC.] wrote: >     >>>        > I’m seeing unexpected behavior in our Queens > environment related to >     >>>        > Glance image visibility. Specifically users who, based > on my >     >>>        > understanding of the visibility and ownership fields, > should NOT be able >     >>>        > to see or view the image. >     >>>        > >     >>>        > If I create a new image with openstack image create > and specify –project >     >>>        > and –private a non-admin user in a different > tenant can see and >     >>>        > boot that image. >     >>>        > >     >>>        > That seems to be the opposite of what should happen. > Any ideas? >     >>> >     >>>        Yep, something's not right there. 
>     >>> >     >>>        Are you sure that the user that can see the image > doesn't have the admin >     >>>        role (for the project in its keystone token) ? >     >>> >     >>>        Did you verify that the image's owner is what you > intended, and that the >     >>>        visibility really is "private" ? >     >>> >     >>>             ~iain >     >>> >     >>>        _______________________________________________ >     >>>        OpenStack-operators mailing list >     >>>        OpenStack-operators at lists.openstack.org >     >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >     >>> >     >>> >     >>>    _______________________________________________ >     >>>    OpenStack-operators mailing list >     >>>    OpenStack-operators at lists.openstack.org >     >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >     > >     > _______________________________________________ >     > OpenStack-operators mailing list >     > OpenStack-operators at lists.openstack.org >     > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > > ------------------------------ > > Message: 13 > Date: Fri, 19 Oct 2018 16:54:12 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" >         > To: Chris Apsey , iain MacDonnell >         , >         "openstack-operators at lists.openstack.org" >         > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? 
- >         Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset="utf-8" > > > For reference, here is our full glance policy.json > > > { >     "context_is_admin":  "role:admin", >     "default": "role:admin", > >     "add_image": "", >     "delete_image": "", >     "get_image": "", >     "get_images": "", >     "modify_image": "", >     "publicize_image": "role:admin", >     "communitize_image": "", >     "copy_from": "", > >     "download_image": "", >     "upload_image": "", > >     "delete_image_location": "", >     "get_image_location": "", >     "set_image_location": "", > >     "add_member": "", >     "delete_member": "", >     "get_member": "", >     "get_members": "", >     "modify_member": "", > >     "manage_image_cache": "role:admin", > >     "get_task": "", >     "get_tasks": "", >     "add_task": "", >     "modify_task": "", >     "tasks_api_access": "role:admin", > >     "deactivate": "", >     "reactivate": "", > >     "get_metadef_namespace": "", >     "get_metadef_namespaces":"", >     "modify_metadef_namespace":"", >     "add_metadef_namespace":"", > >     "get_metadef_object":"", >     "get_metadef_objects":"", >     "modify_metadef_object":"", >     "add_metadef_object":"", > >     "list_metadef_resource_types":"", >     "get_metadef_resource_type":"", >     "add_metadef_resource_type_association":"", > >     "get_metadef_property":"", >     "get_metadef_properties":"", >     "modify_metadef_property":"", >     "add_metadef_property":"", > >     "get_metadef_tag":"", >     "get_metadef_tags":"", >     "modify_metadef_tag":"", >     "add_metadef_tag":"", >     "add_metadef_tags":"" > > } > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > >     Our NDC domain is LDAP backed. Default is not. > >     Our keystone policy.json file is empty {} > > > >     Mike Moore, M.S.S.E. > >     Systems Engineer, Goddard Private Cloud >     Michael.D.Moore at nasa.gov > >     Hydrogen fusion brightens my day. > > >     On 10/18/18, 7:24 PM, "Chris Apsey" wrote: > >         We are using multiple keystone domains - still can't reproduce > this. > >         Do you happen to have a customized keystone policy.json? > >         Worst case, I would launch a devstack of your targeted > release.  If you >         can't reproduce the issue there, you would at least know its > caused by a >         nonstandard config rather than a bug (or at least not a bug > that's present >         when using a default config) > >         On October 18, 2018 18:50:12 iain MacDonnell > >         wrote: > >         > That all looks fine. >         > >         > I believe that the "default" policy applies in place of any > that's not >         > explicitly specified - i.e. "if there's no matching policy > below, you >         > need to have the admin role to be able to do it". I do have > that line in >         > my policy.json, and I cannot reproduce your problem (see below). >         > >         > I'm not using domains (other than "default"). I wonder if > that's a factor... 
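Two things are worth noting about that file. Every image-facing rule ("get_image", "get_images", "add_image", and so on) is the empty string, which oslo.policy evaluates as "always allow", so the API-layer policy is not what hides other tenants' private images — that filtering happens in Glance's database layer, keyed on whether the request context is admin. And "default" is only the fallback for rule names that are not listed at all, so with every rule spelled out it should be harmless here; the entry that actually matters for visibility is "context_is_admin": "role:admin". To probe the file offline, oslo.policy ships a checker that can evaluate a rule against a sample token body — a sketch, assuming the utility is installed and that access.json holds a token response; flag names are from memory of oslopolicy-checker's help, so verify them on your release:

$ oslopolicy-checker --policy /etc/glance/policy.json \
      --access access.json --rule context_is_admin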
>         > >         >     ~iain >         > >         > >         > $ openstack user create --password foo user1 >         > +---------------------+----------------------------------+ >         > | Field               | Value                            | >         > +---------------------+----------------------------------+ >         > | domain_id           | default                          | >         > | enabled             | True                             | >         > | id                  | d18c0031ec56430499a2d690cb1f125c | >         > | name                | user1                            | >         > | options             | {}                               | >         > | password_expires_at | None                             | >         > +---------------------+----------------------------------+ >         > $ openstack user create --password foo user2 >         > +---------------------+----------------------------------+ >         > | Field               | Value                            | >         > +---------------------+----------------------------------+ >         > | domain_id           | default                          | >         > | enabled             | True                             | >         > | id                  | be9f1061a5104abd834eabe98dff055d | >         > | name                | user2                            | >         > | options             | {}                               | >         > | password_expires_at | None                             | >         > +---------------------+----------------------------------+ >         > $ openstack project create project1 >         > +-------------+----------------------------------+ >         > | Field       | Value                            | >         > +-------------+----------------------------------+ >         > | description |                                  | >         > | domain_id   | default                          | >         > | enabled     | True                             | >         > | id          | 826876d6d3724018bae6253c7f540cb3 | >         > | is_domain   | False                            | >         > | name        | project1                         | >         > | parent_id   | default                          | >         > | tags        | []                               | >         > +-------------+----------------------------------+ >         > $ openstack project create project2 >         > +-------------+----------------------------------+ >         > | Field       | Value                            | >         > +-------------+----------------------------------+ >         > | description |                                  | >         > | domain_id   | default                          | >         > | enabled     | True                             | >         > | id          | b446b93ac6e24d538c1943acbdd13cb2 | >         > | is_domain   | False                            | >         > | name        | project2                         | >         > | parent_id   | default                          | >         > | tags        | []                               | >         > +-------------+----------------------------------+ >         > $ openstack role add --user user1 --project project1 _member_ >         > $ openstack role add --user user2 --project project2 _member_ >         > $ export OS_PASSWORD=foo >         > $ export OS_USERNAME=user1 >         > $ export OS_PROJECT_NAME=project1 >         > $ openstack image list >         > 
+--------------------------------------+--------+--------+ >         > | ID                                   | Name   | Status | >         > +--------------------------------------+--------+--------+ >         > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >         > +--------------------------------------+--------+--------+ >         > $ openstack image create --private image1 >         > > +------------------+------------------------------------------------------------------------------+ >         > | Field            | Value >         >                          | >         > > +------------------+------------------------------------------------------------------------------+ >         > | checksum         | None >         >                          | >         > | container_format | bare >         >                          | >         > | created_at       | 2018-10-18T22:17:41Z >         >                          | >         > | disk_format      | raw >         >                          | >         > | file             | >         > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file >         >     | >         > | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >         >                          | >         > | min_disk         | 0 >         >                          | >         > | min_ram          | 0 >         >                          | >         > | name             | image1 >         >                          | >         > | owner            | 826876d6d3724018bae6253c7f540cb3 >         >                          | >         > | properties       | locations='[]', os_hash_algo='None', >         > os_hash_value='None', os_hidden='False' | >         > | protected        | False >         >                          | >         > | schema           | /v2/schemas/image >         >                          | >         > | size             | None >         >                          | >         > | status           | queued >         >                          | >         > | tags             | >         >                          | >         > | updated_at       | 2018-10-18T22:17:41Z >         >                          | >         > | virtual_size     | None >         >                          | >         > | visibility       | private >         >                          | >         > > +------------------+------------------------------------------------------------------------------+ >         > $ openstack image list >         > +--------------------------------------+--------+--------+ >         > | ID                                   | Name   | Status | >         > +--------------------------------------+--------+--------+ >         > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >         > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >         > +--------------------------------------+--------+--------+ >         > $ export OS_USERNAME=user2 >         > $ export OS_PROJECT_NAME=project2 >         > $ openstack image list >         > +--------------------------------------+--------+--------+ >         > | ID                                   | Name   | Status | >         > +--------------------------------------+--------+--------+ >         > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >         > +--------------------------------------+--------+--------+ >         > $ export OS_USERNAME=admin >         > $ export OS_PROJECT_NAME=admin >         > $ export OS_PASSWORD=xxx >         > $ openstack image 
set --public > 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >         > $ export OS_USERNAME=user2 >         > $ export OS_PROJECT_NAME=project2 >         > $ export OS_PASSWORD=foo >         > $ openstack image list >         > +--------------------------------------+--------+--------+ >         > | ID                                   | Name   | Status | >         > +--------------------------------------+--------+--------+ >         > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >         > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >         > +--------------------------------------+--------+--------+ >         > $ >         > >         > >         > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >         > INTEGRA, INC.] wrote: >         >> openstack user create --domain default --password xxxxxxxx > --project-domain >         >> ndc --project test mike >         >> >         >> >         >> openstack role add --user mike --user-domain default > --project test user >         >> >         >> my admin account is in the NDC domain with a different username. >         >> >         >> >         >> >         >> /etc/glance/policy.json >         >> { >         >> >         >> "context_is_admin":  "role:admin", >         >> "default": "role:admin", >         >> >         >> >         >> >         >> >         >> I'm not terribly familiar with the policies but I feel like > that default >         >> line is making everyone an admin by default? >         >> >         >> >         >> Mike Moore, M.S.S.E. >         >> >         >> Systems Engineer, Goddard Private Cloud >         >> Michael.D.Moore at nasa.gov >         >> >         >> Hydrogen fusion brightens my day. >         >> >         >> >         >> On 10/18/18, 6:25 PM, "iain MacDonnell" > wrote: >         >> >         >> >         >> I suspect that your non-admin user is not really non-admin. > How did you >         >> create it? >         >> >         >> What you have for "context_is_admin" in glance's policy.json ? >         >> >         >>  ~iain >         >> >         >> >         >> On 10/18/2018 03:11 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS >         >> INTEGRA, INC.] wrote: >         >>> I have replicated this unexpected behavior in a Pike test > environment, in >         >>> addition to our Queens environment. >         >>> >         >>> >         >>> >         >>> Mike Moore, M.S.S.E. >         >>> >         >>> Systems Engineer, Goddard Private Cloud >         >>> Michael.D.Moore at nasa.gov >         >>> >         >>> Hydrogen fusion brightens my day. >         >>> >         >>> >         >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane > (GSFC-720.0)[BUSINESS INTEGRA, >         >>> INC.]" wrote: >         >>> >         >>>    Yes. I verified it by creating a non-admin user in a > different tenant. I >         >>>    created a new image, set to private with the project > defined as our admin >         >>>    tenant. >         >>> >         >>>    In the database I can see that the image is 'private' > and the owner is the >         >>>    ID of the admin tenant. >         >>> >         >>>    Mike Moore, M.S.S.E. >         >>> >         >>>    Systems Engineer, Goddard Private Cloud >         >>>    Michael.D.Moore at nasa.gov >         >>> >         >>>    Hydrogen fusion brightens my day. 
>         >>> >         >>> >         >>>    On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: >         >>> >         >>> >         >>> >         >>>        On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS >         >>>        INTEGRA, INC.] wrote: >         >>>        > I’m seeing unexpected behavior in our Queens > environment related to >         >>>        > Glance image visibility. Specifically users who, > based on my >         >>>        > understanding of the visibility and ownership > fields, should NOT be able >         >>>        > to see or view the image. >         >>>        > >         >>>        > If I create a new image with openstack image > create and specify –project >         >>>        > and –private a non-admin user in a > different tenant can see and >         >>>        > boot that image. >         >>>        > >         >>>        > That seems to be the opposite of what should > happen. Any ideas? >         >>> >         >>>        Yep, something's not right there. >         >>> >         >>>        Are you sure that the user that can see the image > doesn't have the admin >         >>>        role (for the project in its keystone token) ? >         >>> >         >>>        Did you verify that the image's owner is what you > intended, and that the >         >>>        visibility really is "private" ? >         >>> >         >>>             ~iain >         >>> >         >>>        _______________________________________________ >         >>>        OpenStack-operators mailing list >         >>>        OpenStack-operators at lists.openstack.org >         >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >         >>> >         >>> >         >>>    _______________________________________________ >         >>>    OpenStack-operators mailing list >         >>>    OpenStack-operators at lists.openstack.org >         >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >         > >         > _______________________________________________ >         > OpenStack-operators mailing list >         > OpenStack-operators at lists.openstack.org >         > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > >     _______________________________________________ >     OpenStack-operators mailing list >     OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > ------------------------------ > > Message: 14 > Date: Fri, 19 Oct 2018 13:45:03 -0400 > From: Jay Pipes > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. >         1.1 released > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > Please do not use these mailing lists to advertise > closed-source/proprietary software solutions. > > Thank you, > -jay > > On 10/19/2018 05:42 AM, Adrian Andreias wrote: >> Hello, >> >> We've just released Fleio version 1.1. 
>> >> Fleio is a billing solution and control panel for OpenStack public >> clouds and traditional web hosters. >> >> Fleio software automates the entire process for cloud users. New >> customers can use Fleio to sign up for an account, pay invoices, add >> credit to their account, as well as create and manage cloud resources >> such as virtual machines, storage and networking. >> >> Full feature list: >> https://fleio.com#features > >> >> You can see an online demo: >> https://fleio.com/demo > >> >> And sign-up for a free trial: >> https://fleio.com/signup > >> >> >> >> Cheers! >> >> - Adrian Andreias >> https://fleio.com > >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > > > ------------------------------ > > Message: 15 > Date: Fri, 19 Oct 2018 20:13:40 +0200 > From: Mohammed Naser > To: jaypipes at gmail.com > Cc: openstack-operators > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. >         1.1     released > Message-ID: > > > Content-Type: text/plain; charset="UTF-8" > > On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote: >> >> Please do not use these mailing lists to advertise >> closed-source/proprietary software solutions. > > +1 > >> Thank you, >> -jay >> >> On 10/19/2018 05:42 AM, Adrian Andreias wrote: >> > Hello, >> > >> > We've just released Fleio version 1.1. >> > >> > Fleio is a billing solution and control panel for OpenStack public >> > clouds and traditional web hosters. >> > >> > Fleio software automates the entire process for cloud users. New >> > customers can use Fleio to sign up for an account, pay invoices, add >> > credit to their account, as well as create and manage cloud resources >> > such as virtual machines, storage and networking. >> > >> > Full feature list: >> > https://fleio.com#features > >> > >> > You can see an online demo: >> > https://fleio.com/demo > >> > >> > And sign-up for a free trial: >> > https://fleio.com/signup > >> > >> > >> > >> > Cheers! >> > >> > - Adrian Andreias >> > https://fleio.com > >> > >> > >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > > > > ------------------------------ > > Message: 16 > Date: Fri, 19 Oct 2018 14:39:29 -0400 > From: Erik McCormick > To: openstack-operators > Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae >         and     missing tenant network interface > Message-ID: > > > Content-Type: text/plain; charset="UTF-8" > > I've been wrestling with getting Octavia up and running and have > become stuck on two issues. I'm hoping someone has run into these > before. My google foo has come up empty. > > Issue 1: > When the Octavia controller tries to poll the amphora instance, it > tries repeatedly and eventually fails. 
The error on the controller > side is: > > 2018-10-19 14:17:39.181 26 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > retries (currently set to 300) exhausted.  The amphora is unavailable. > Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)) > > On the amphora side I see: > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > failure (_ssl.c:1754) > > I've generated certificates both with the script in the Octavia git > repo, and with the Openstack Ansible playbook. I can see that they are > present in /etc/octavia/certs. > > I'm using the Kolla (Queens) containers for the control plane so I'm > sure I've satisfied all the python library constraints. > > Issue 2: > I"m not sure how it gets configured, but the tenant network interface > (ens6) never comes up. I can spawn other instances on that network > with no issue, and I can see that Neutron has the port attached to the > instance. However, in the instance this is all I get: > > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1 >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >     inet 127.0.0.1/8 scope host lo >        valid_lft forever preferred_lft forever >     inet6 ::1/128 scope host >        valid_lft forever preferred_lft forever > 2: ens3: mtu 9000 qdisc pfifo_fast > state UP group default qlen 1000 >     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe30:c460/64 scope link >        valid_lft forever preferred_lft forever > 3: ens6: mtu 1500 qdisc noop state DOWN group > default qlen 1000 >     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > There's no evidence of the interface anywhere else including udev rules. > > Any help with either or both issues would be greatly appreciated. > > Cheers, > Erik > > > > ------------------------------ > > Message: 17 > Date: Sat, 20 Oct 2018 01:47:42 +0200 > From: Gaël THEROND > To: Erik McCormick > Cc: openstack-operators > Subject: Re: [Openstack-operators] [Octavia] SSL errors polling >         amphorae and missing tenant network interface > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > Hi eric! > > Glad I’m not the only one having this issue with the ssl communication > between the amphora and the CP. 
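Those "rsa routines ... certificate verify failed" errors are the classic symptom of the controller and the amphora holding certificates from two different CA runs — easy to end up with when certificates have been generated both by the script in the Octavia repo and by the OpenStack-Ansible playbook. The handshake can be tested by hand from a controller node; a sketch using the file names the sample scripts tend to produce (client.pem with cert and key concatenated, ca_01.pem for the CA — substitute whatever the [haproxy_amphora] and [certificates] options in your octavia.conf actually point at):

$ openssl s_client -connect 10.7.0.112:9443 \
      -cert /etc/octavia/certs/client.pem \
      -CAfile /etc/octavia/certs/ca_01.pem

If verification fails here too, the usual fix is to regenerate a single set of certificates, point octavia.conf at it, and rebuild the amphora image so both sides trust the same CA.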
> > Even if I don’t yet get a clear answer regarding that issue, I think your > second issue is not an issue as the interface is mounted on a namespace and > so you’ll need to list all nic even those from namespace. > > Use an ip netns ls to get the namespace. > > Hope it will help. > > Le ven. 19 oct. 2018 à 20:40, Erik McCormick a > écrit : > >> I've been wrestling with getting Octavia up and running and have >> become stuck on two issues. I'm hoping someone has run into these >> before. My google foo has come up empty. >> >> Issue 1: >> When the Octavia controller tries to poll the amphora instance, it >> tries repeatedly and eventually fails. The error on the controller >> side is: >> >> 2018-10-19 14:17:39.181 26 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >> retries (currently set to 300) exhausted.  The amphora is unavailable. >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >> SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)) >> >> On the amphora side I see: >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >> failure (_ssl.c:1754) >> >> I've generated certificates both with the script in the Octavia git >> repo, and with the Openstack Ansible playbook. I can see that they are >> present in /etc/octavia/certs. >> >> I'm using the Kolla (Queens) containers for the control plane so I'm >> sure I've satisfied all the python library constraints. >> >> Issue 2: >> I"m not sure how it gets configured, but the tenant network interface >> (ens6) never comes up. I can spawn other instances on that network >> with no issue, and I can see that Neutron has the port attached to the >> instance. 
However, in the instance this is all I get: >> >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> group default qlen 1 >>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >>     inet 127.0.0.1/8 scope host lo >>        valid_lft forever preferred_lft forever >>     inet6 ::1/128 scope host >>        valid_lft forever preferred_lft forever >> 2: ens3: mtu 9000 qdisc pfifo_fast >> state UP group default qlen 1000 >>     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >>     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe30:c460/64 scope link >>        valid_lft forever preferred_lft forever >> 3: ens6: mtu 1500 qdisc noop state DOWN group >> default qlen 1000 >>     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >> >> There's no evidence of the interface anywhere else including udev rules. >> >> Any help with either or both issues would be greatly appreciated. >> >> Cheers, >> Erik >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > ------------------------------ > > End of OpenStack-operators Digest, Vol 96, Issue 7 > ************************************************** > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > From michael.d.moore at nasa.gov Wed Oct 24 03:35:14 2018 From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) Date: Wed, 24 Oct 2018 03:35:14 +0000 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: Message-ID: <402CFD9D-15F6-4C23-89FB-895FE250F576@nasa.gov> This is interesting. The "roles" field shows "user" properly for the non-admin user, and "admin" for the admin users. Nothing in our output for `openstack --debug token issue` shows "is_admin_project" My colleague did find logs in Keystone are showing is_admin_project: True on his non-admin user that is only a "user" according to the roles field in a token issue test. We're wondering if it's not a glance issue but a keystone issue/misconfiguration Mike Moore, M.S.S.E. Systems Engineer, Goddard Private Cloud Michael.D.Moore at nasa.gov Hydrogen fusion brightens my day. On 10/23/18, 7:50 PM, "iain MacDonnell" wrote: It (still) seems like there's something funky about admin/non-admin in your case. You could try "openstack --debug token issue" (in the admin and non-admin cases), and examine the token dict that gets output. Look for the "roles" list and "is_admin_project". 
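For a genuinely non-admin user, the roles list in that dict should contain only something like "user". One subtlety worth knowing: if "is_admin_project" is absent from the token, consumers treat it as true for backwards compatibility — which is what happens when keystone has no admin project configured — so every token then looks like it belongs to the admin project to any policy rule that checks that field. A hedged sketch of the keystone-side fix (option names as found in keystone.conf's [resource] section; verify against your release):

[resource]
admin_project_domain_name = Default
admin_project_name = admin

After restarting keystone and re-issuing tokens, only tokens scoped to that project should carry is_admin_project = true.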
~iain On 10/23/2018 03:21 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote: > We have submitted a bug for this > > https://bugs.launchpad.net/glance/+bug/1799588 > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > ** > > Hydrogen fusion brightens my day. > > *From: *"Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > *Date: *Saturday, October 20, 2018 at 7:22 PM > *To: *Logan Hicks , > "openstack-operators at lists.openstack.org" > > *Subject: *Re: [Openstack-operators] OpenStack-operators Digest, Vol 96, > Issue 7 > > The images exist and are bootable. I'm going to trace through the actual > code for glance API. Any suggestions on where the show/hide logic is > when it filters responses? I'm new to digging through OpenStack code. > > ------------------------------------------------------------------------ > > *From:*Logan Hicks [logan.hicks at live.com] > *Sent:* Friday, October 19, 2018 8:00 PM > *To:* openstack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] OpenStack-operators Digest, Vol 96, > Issue 7 > > Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants (Chris Apsey) > > I noticed that the image says queued. If Im not mistaken, an image cant > have permissions applied until after the image is created, which might > explain the issue hes seeing. > > The object doesnt exist until its made by openstack. > > Id check to see if something is holding up images being made. Id start > with glance. > > Respectfully, > > Logan Hicks > > -------- Original message -------- > > From: openstack-operators-request at lists.openstack.org > > Date: 10/19/18 7:49 PM (GMT-05:00) > > To: openstack-operators at lists.openstack.org > > Subject: OpenStack-operators Digest, Vol 96, Issue 7 > > Send OpenStack-operators mailing list submissions to > openstack-operators at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > or, via email, send a message with subject or body 'help' to > openstack-operators-request at lists.openstack.org > > You can reach the person managing the list at > openstack-operators-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of OpenStack-operators digest..." > > > Today's Topics: > > 1. [nova] Removing the CachingScheduler (Matt Riedemann) > 2. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants > (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) > 3. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants (Chris Apsey) > 4. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants (iain MacDonnell) > 5. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants > (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) > 6. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants (iain MacDonnell) > 7. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants (Chris Apsey) > 8. osops-tools-monitoring Dependency problems (Tomáš Vondra) > 9. [heat][cinder] How to create stack snapshot including volumes > (Christian Zunker) > 10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias) > 11. 
Re: [Openstack-sigs] [all] Naming the T release of OpenStack > (Tony Breeds) > 12. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants > (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) > 13. Re: Glance Image Visibility Issue? - Non admin users can see > private images from other tenants > (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]) > 14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes) > 15. Re: Fleio - OpenStack billing - ver. 1.1 released (Mohammed Naser) > 16. [Octavia] SSL errors polling amphorae and missing tenant > network interface (Erik McCormick) > 17. Re: [Octavia] SSL errors polling amphorae and missing tenant > network interface (Gaël THEROND) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 18 Oct 2018 17:07:00 -0500 > From: Matt Riedemann > To: "openstack-operators at lists.openstack.org" > > Subject: [Openstack-operators] [nova] Removing the CachingScheduler > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > It's been deprecated since Pike, and the time has come to remove it [1]. > > mgagne has been the most vocal CachingScheduler operator I know and he > has tested out the "nova-manage placement heal_allocations" CLI, added > in Rocky, and said it will work for migrating his deployment from the > CachingScheduler to the FilterScheduler + Placement. > > If you are using the CachingScheduler and have a problem with its > removal, now is the time to speak up or forever hold your peace. > > [1] https://review.openstack.org/#/c/611723/1 > > > -- > > Thanks, > > Matt > > > > ------------------------------ > > Message: 2 > Date: Thu, 18 Oct 2018 22:11:40 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > To: iain MacDonnell , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset="utf-8" > > I have replicated this unexpected behavior in a Pike test environment, > in addition to our Queens environment. > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > > Yes. I verified it by creating a non-admin user in a different > tenant. I created a new image, set to private with the project defined > as our admin tenant. > > In the database I can see that the image is 'private' and the owner > is the ID of the admin tenant. > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I’m seeing unexpected behavior in our Queens environment > related to > > Glance image visibility. Specifically users who, based on my > > understanding of the visibility and ownership fields, should > NOT be able > > to see or view the image. > > > > If I create a new image with openstack image create and > specify –project > > and –private a non-admin user in a different tenant > can see and > > boot that image. > > > > That seems to be the opposite of what should happen. Any ideas? > > Yep, something's not right there. 
> > Are you sure that the user that can see the image doesn't have > the admin > role (for the project in its keystone token) ? > > Did you verify that the image's owner is what you intended, and > that the > visibility really is "private" ? > > ~iain > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > ------------------------------ > > Message: 3 > Date: Thu, 18 Oct 2018 18:23:35 -0400 > From: Chris Apsey > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > , iain MacDonnell > , > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > Message-ID: > <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net> > Content-Type: text/plain; format=flowed; charset="UTF-8" > > Do you have a liberal/custom policy.json that perhaps is causing unexpected > behavior? Can't seem to reproduce this. > > On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > >> I have replicated this unexpected behavior in a Pike test environment, in >> addition to our Queens environment. >> >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >> INC.]" wrote: >> >> Yes. I verified it by creating a non-admin user in a different tenant. I >> created a new image, set to private with the project defined as our admin >> tenant. >> >> In the database I can see that the image is 'private' and the owner is the >> ID of the admin tenant. >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >> >> >> >> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> I’m seeing unexpected behavior in our Queens environment related to >>> Glance image visibility. Specifically users who, based on my >>> understanding of the visibility and ownership fields, should NOT be able >>> to see or view the image. >>> >>> If I create a new image with openstack image create and specify –project >>> and –private a non-admin user in a different tenant can see and >>> boot that image. >>> >>> That seems to be the opposite of what should happen. Any ideas? >> >> Yep, something's not right there. >> >> Are you sure that the user that can see the image doesn't have the admin >> role (for the project in its keystone token) ? >> >> Did you verify that the image's owner is what you intended, and that the >> visibility really is "private" ? 
>> >> ~iain
>> >>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 18 Oct 2018 15:25:22 -0700
> From: iain MacDonnell
> To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> ,
> "openstack-operators at lists.openstack.org"
>
> Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> Non admin users can see private images from other tenants
> Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
> I suspect that your non-admin user is not really non-admin. How did you
> create it?
>
> What do you have for "context_is_admin" in glance's policy.json ?
>
> ~iain
>
>
> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> INTEGRA, INC.] wrote:
>> I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment.
>>
>>
>>
>> Mike Moore, M.S.S.E.
>>
>> Systems Engineer, Goddard Private Cloud
>> Michael.D.Moore at nasa.gov
>>
>> Hydrogen fusion brightens my day.
>>
>>
>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote:
>>
>> Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant.
>>
>> In the database I can see that the image is 'private' and the owner is the ID of the admin tenant.
>>
>> Mike Moore, M.S.S.E.
>>
>> Systems Engineer, Goddard Private Cloud
>> Michael.D.Moore at nasa.gov
>>
>> Hydrogen fusion brightens my day.
>>
>>
>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote:
>>
>>
>>
>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
>> INTEGRA, INC.] wrote:
>> > I’m seeing unexpected behavior in our Queens environment related to
>> > Glance image visibility. Specifically users who, based on my
>> > understanding of the visibility and ownership fields, should NOT be able
>> > to see or view the image.
>> >
>> > If I create a new image with openstack image create and specify --project
>> > and --private a non-admin user in a different tenant can see and
>> > boot that image.
>> >
>> > That seems to be the opposite of what should happen. Any ideas?
>>
>> Yep, something's not right there.
>>
>> Are you sure that the user that can see the image doesn't have the admin
>> role (for the project in its keystone token) ?
>>
>> Did you verify that the image's owner is what you intended, and that the
>> visibility really is "private" ?
>> >> ~iain >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= >> >> > > > > ------------------------------ > > Message: 5 > Date: Thu, 18 Oct 2018 22:32:42 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > To: iain MacDonnell , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov> > Content-Type: text/plain; charset="utf-8" > > openstack user create --domain default --password xxxxxxxx > --project-domain ndc --project test mike > > > openstack role add --user mike --user-domain default --project test user > > my admin account is in the NDC domain with a different username. > > > > /etc/glance/policy.json > { > > "context_is_admin": "role:admin", > "default": "role:admin", > > > > > I'm not terribly familiar with the policies but I feel like that default > line is making everyone an admin by default? > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: > > > I suspect that your non-admin user is not really non-admin. How did > you > create it? > > What you have for "context_is_admin" in glance's policy.json ? > > ~iain > > > On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: > > I have replicated this unexpected behavior in a Pike test > environment, in addition to our Queens environment. > > > > > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > > > > Yes. I verified it by creating a non-admin user in a > different tenant. I created a new image, set to private with the project > defined as our admin tenant. > > > > In the database I can see that the image is 'private' and > the owner is the ID of the admin tenant. > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: > > > > > > > > On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS > > INTEGRA, INC.] wrote: > > > I’m seeing unexpected behavior in our Queens > environment related to > > > Glance image visibility. 
Specifically users who, based > on my > > > understanding of the visibility and ownership fields, > should NOT be able > > > to see or view the image. > > > > > > If I create a new image with openstack image create > and specify –project > > > and –private a non-admin user in a different > tenant can see and > > > boot that image. > > > > > > That seems to be the opposite of what should happen. > Any ideas? > > > > Yep, something's not right there. > > > > Are you sure that the user that can see the image > doesn't have the admin > > role (for the project in its keystone token) ? > > > > Did you verify that the image's owner is what you > intended, and that the > > visibility really is "private" ? > > > > ~iain > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > > > ------------------------------ > > Message: 6 > Date: Thu, 18 Oct 2018 15:48:27 -0700 > From: iain MacDonnell > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > > That all looks fine. > > I believe that the "default" policy applies in place of any that's not > explicitly specified - i.e. "if there's no matching policy below, you > need to have the admin role to be able to do it". I do have that line in > my policy.json, and I cannot reproduce your problem (see below). > > I'm not using domains (other than "default"). I wonder if that's a factor... 
> > ~iain > > > $ openstack user create --password foo user1 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | d18c0031ec56430499a2d690cb1f125c | > | name | user1 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack user create --password foo user2 > +---------------------+----------------------------------+ > | Field | Value | > +---------------------+----------------------------------+ > | domain_id | default | > | enabled | True | > | id | be9f1061a5104abd834eabe98dff055d | > | name | user2 | > | options | {} | > | password_expires_at | None | > +---------------------+----------------------------------+ > $ openstack project create project1 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | 826876d6d3724018bae6253c7f540cb3 | > | is_domain | False | > | name | project1 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack project create project2 > +-------------+----------------------------------+ > | Field | Value | > +-------------+----------------------------------+ > | description | | > | domain_id | default | > | enabled | True | > | id | b446b93ac6e24d538c1943acbdd13cb2 | > | is_domain | False | > | name | project2 | > | parent_id | default | > | tags | [] | > +-------------+----------------------------------+ > $ openstack role add --user user1 --project project1 _member_ > $ openstack role add --user user2 --project project2 _member_ > $ export OS_PASSWORD=foo > $ export OS_USERNAME=user1 > $ export OS_PROJECT_NAME=project1 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ openstack image create --private image1 > +------------------+------------------------------------------------------------------------------+ > | Field | Value > | > +------------------+------------------------------------------------------------------------------+ > | checksum | None > | > | container_format | bare > | > | created_at | 2018-10-18T22:17:41Z > | > | disk_format | raw > | > | file | > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > | > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > | > | min_disk | 0 > | > | min_ram | 0 > | > | name | image1 > | > | owner | 826876d6d3724018bae6253c7f540cb3 > | > | properties | locations='[]', os_hash_algo='None', > os_hash_value='None', os_hidden='False' | > | protected | False > | > | schema | /v2/schemas/image > | > | size | None > | > | status | queued > | > | tags | > | > | updated_at | 2018-10-18T22:17:41Z > | > | virtual_size | None > | > | visibility | private > | > +------------------+------------------------------------------------------------------------------+ > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > 
+--------------------------------------+--------+--------+ > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > +--------------------------------------+--------+--------+ > $ export OS_USERNAME=admin > $ export OS_PROJECT_NAME=admin > $ export OS_PASSWORD=xxx > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > $ export OS_USERNAME=user2 > $ export OS_PROJECT_NAME=project2 > $ export OS_PASSWORD=foo > $ openstack image list > +--------------------------------------+--------+--------+ > | ID | Name | Status | > +--------------------------------------+--------+--------+ > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > +--------------------------------------+--------+--------+ > $ > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.] wrote: >> openstack user create --domain default --password xxxxxxxx --project-domain ndc --project test mike >> >> >> openstack role add --user mike --user-domain default --project test user >> >> my admin account is in the NDC domain with a different username. >> >> >> >> /etc/glance/policy.json >> { >> >> "context_is_admin": "role:admin", >> "default": "role:admin", >> >> >> >> >> I'm not terribly familiar with the policies but I feel like that default line is making everyone an admin by default? >> >> >> Mike Moore, M.S.S.E. >> >> Systems Engineer, Goddard Private Cloud >> Michael.D.Moore at nasa.gov >> >> Hydrogen fusion brightens my day. >> >> >> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >> >> >> I suspect that your non-admin user is not really non-admin. How did you >> create it? >> >> What you have for "context_is_admin" in glance's policy.json ? >> >> ~iain >> >> >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >> > I have replicated this unexpected behavior in a Pike test environment, in addition to our Queens environment. >> > >> > >> > >> > Mike Moore, M.S.S.E. >> > >> > Systems Engineer, Goddard Private Cloud >> > Michael.D.Moore at nasa.gov >> > >> > Hydrogen fusion brightens my day. >> > >> > >> > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" wrote: >> > >> > Yes. I verified it by creating a non-admin user in a different tenant. I created a new image, set to private with the project defined as our admin tenant. >> > >> > In the database I can see that the image is 'private' and the owner is the ID of the admin tenant. >> > >> > Mike Moore, M.S.S.E. >> > >> > Systems Engineer, Goddard Private Cloud >> > Michael.D.Moore at nasa.gov >> > >> > Hydrogen fusion brightens my day. >> > >> > >> > On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >> > >> > >> > >> > On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> > INTEGRA, INC.] wrote: >> > > I’m seeing unexpected behavior in our Queens environment related to >> > > Glance image visibility. Specifically users who, based on my >> > > understanding of the visibility and ownership fields, should NOT be able >> > > to see or view the image. >> > > >> > > If I create a new image with openstack image create and specify –project >> > > and –private a non-admin user in a different tenant can see and >> > > boot that image. 
>> > >
>> > > That seems to be the opposite of what should happen. Any ideas?
>> >
>> > Yep, something's not right there.
>> >
>> > Are you sure that the user that can see the image doesn't have the admin
>> > role (for the project in its keystone token) ?
>> >
>> > Did you verify that the image's owner is what you intended, and that the
>> > visibility really is "private" ?
>> >
>> > ~iain
>> >
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>> >
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 18 Oct 2018 19:23:42 -0400
> From: Chris Apsey
> To: iain MacDonnell , "Moore, Michael Dane
> (GSFC-720.0)[BUSINESS INTEGRA, INC.]" ,
>
> Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> Non admin users can see private images from other tenants
> Message-ID:
> <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net>
> Content-Type: text/plain; format=flowed; charset="UTF-8"
>
> We are using multiple keystone domains - still can't reproduce this.
>
> Do you happen to have a customized keystone policy.json?
>
> Worst case, I would launch a devstack of your targeted release. If you
> can't reproduce the issue there, you would at least know it's caused by a
> nonstandard config rather than a bug (or at least not a bug that's present
> when using a default config).
>
> On October 18, 2018 18:50:12 iain MacDonnell
> wrote:
>
>> That all looks fine.
>>
>> I believe that the "default" policy applies in place of any that's not
>> explicitly specified - i.e. "if there's no matching policy below, you
>> need to have the admin role to be able to do it". I do have that line in
>> my policy.json, and I cannot reproduce your problem (see below).
>>
>> I'm not using domains (other than "default"). I wonder if that's a factor...
>> >> ~iain >> >> >> $ openstack user create --password foo user1 >> +---------------------+----------------------------------+ >> | Field | Value | >> +---------------------+----------------------------------+ >> | domain_id | default | >> | enabled | True | >> | id | d18c0031ec56430499a2d690cb1f125c | >> | name | user1 | >> | options | {} | >> | password_expires_at | None | >> +---------------------+----------------------------------+ >> $ openstack user create --password foo user2 >> +---------------------+----------------------------------+ >> | Field | Value | >> +---------------------+----------------------------------+ >> | domain_id | default | >> | enabled | True | >> | id | be9f1061a5104abd834eabe98dff055d | >> | name | user2 | >> | options | {} | >> | password_expires_at | None | >> +---------------------+----------------------------------+ >> $ openstack project create project1 >> +-------------+----------------------------------+ >> | Field | Value | >> +-------------+----------------------------------+ >> | description | | >> | domain_id | default | >> | enabled | True | >> | id | 826876d6d3724018bae6253c7f540cb3 | >> | is_domain | False | >> | name | project1 | >> | parent_id | default | >> | tags | [] | >> +-------------+----------------------------------+ >> $ openstack project create project2 >> +-------------+----------------------------------+ >> | Field | Value | >> +-------------+----------------------------------+ >> | description | | >> | domain_id | default | >> | enabled | True | >> | id | b446b93ac6e24d538c1943acbdd13cb2 | >> | is_domain | False | >> | name | project2 | >> | parent_id | default | >> | tags | [] | >> +-------------+----------------------------------+ >> $ openstack role add --user user1 --project project1 _member_ >> $ openstack role add --user user2 --project project2 _member_ >> $ export OS_PASSWORD=foo >> $ export OS_USERNAME=user1 >> $ export OS_PROJECT_NAME=project1 >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID | Name | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> +--------------------------------------+--------+--------+ >> $ openstack image create --private image1 >> +------------------+------------------------------------------------------------------------------+ >> | Field | Value >> | >> +------------------+------------------------------------------------------------------------------+ >> | checksum | None >> | >> | container_format | bare >> | >> | created_at | 2018-10-18T22:17:41Z >> | >> | disk_format | raw >> | >> | file | >> /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file >> | >> | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >> | >> | min_disk | 0 >> | >> | min_ram | 0 >> | >> | name | image1 >> | >> | owner | 826876d6d3724018bae6253c7f540cb3 >> | >> | properties | locations='[]', os_hash_algo='None', >> os_hash_value='None', os_hidden='False' | >> | protected | False >> | >> | schema | /v2/schemas/image >> | >> | size | None >> | >> | status | queued >> | >> | tags | >> | >> | updated_at | 2018-10-18T22:17:41Z >> | >> | virtual_size | None >> | >> | visibility | private >> | >> +------------------+------------------------------------------------------------------------------+ >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID | Name | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee 
| cirros | active | >> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >> +--------------------------------------+--------+--------+ >> $ export OS_USERNAME=user2 >> $ export OS_PROJECT_NAME=project2 >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID | Name | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> +--------------------------------------+--------+--------+ >> $ export OS_USERNAME=admin >> $ export OS_PROJECT_NAME=admin >> $ export OS_PASSWORD=xxx >> $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 >> $ export OS_USERNAME=user2 >> $ export OS_PROJECT_NAME=project2 >> $ export OS_PASSWORD=foo >> $ openstack image list >> +--------------------------------------+--------+--------+ >> | ID | Name | Status | >> +--------------------------------------+--------+--------+ >> | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | >> | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | >> +--------------------------------------+--------+--------+ >> $ >> >> >> On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >> INTEGRA, INC.] wrote: >>> openstack user create --domain default --password xxxxxxxx --project-domain >>> ndc --project test mike >>> >>> >>> openstack role add --user mike --user-domain default --project test user >>> >>> my admin account is in the NDC domain with a different username. >>> >>> >>> >>> /etc/glance/policy.json >>> { >>> >>> "context_is_admin": "role:admin", >>> "default": "role:admin", >>> >>> >>> >>> >>> I'm not terribly familiar with the policies but I feel like that default >>> line is making everyone an admin by default? >>> >>> >>> Mike Moore, M.S.S.E. >>> >>> Systems Engineer, Goddard Private Cloud >>> Michael.D.Moore at nasa.gov >>> >>> Hydrogen fusion brightens my day. >>> >>> >>> On 10/18/18, 6:25 PM, "iain MacDonnell" wrote: >>> >>> >>> I suspect that your non-admin user is not really non-admin. How did you >>> create it? >>> >>> What you have for "context_is_admin" in glance's policy.json ? >>> >>> ~iain >>> >>> >>> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>> INTEGRA, INC.] wrote: >>>> I have replicated this unexpected behavior in a Pike test environment, in >>>> addition to our Queens environment. >>>> >>>> >>>> >>>> Mike Moore, M.S.S.E. >>>> >>>> Systems Engineer, Goddard Private Cloud >>>> Michael.D.Moore at nasa.gov >>>> >>>> Hydrogen fusion brightens my day. >>>> >>>> >>>> On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, >>>> INC.]" wrote: >>>> >>>> Yes. I verified it by creating a non-admin user in a different tenant. I >>>> created a new image, set to private with the project defined as our admin >>>> tenant. >>>> >>>> In the database I can see that the image is 'private' and the owner is the >>>> ID of the admin tenant. >>>> >>>> Mike Moore, M.S.S.E. >>>> >>>> Systems Engineer, Goddard Private Cloud >>>> Michael.D.Moore at nasa.gov >>>> >>>> Hydrogen fusion brightens my day. >>>> >>>> >>>> On 10/18/18, 1:07 AM, "iain MacDonnell" wrote: >>>> >>>> >>>> >>>> On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS >>>> INTEGRA, INC.] wrote: >>>> > I’m seeing unexpected behavior in our Queens environment related to >>>> > Glance image visibility. Specifically users who, based on my >>>> > understanding of the visibility and ownership fields, should NOT be able >>>> > to see or view the image. 
>>>> >
>>>> > If I create a new image with openstack image create and specify --project
>>>> > and --private a non-admin user in a different tenant can see and
>>>> > boot that image.
>>>> >
>>>> > That seems to be the opposite of what should happen. Any ideas?
>>>>
>>>> Yep, something's not right there.
>>>>
>>>> Are you sure that the user that can see the image doesn't have the admin
>>>> role (for the project in its keystone token) ?
>>>>
>>>> Did you verify that the image's owner is what you intended, and that the
>>>> visibility really is "private" ?
>>>>
>>>> ~iain
>>>>
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>>
>>>>
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ------------------------------
>
> Message: 8
> Date: Fri, 19 Oct 2018 10:58:30 +0200
> From: Tomáš Vondra
> To:
> Subject: [Openstack-operators] osops-tools-monitoring Dependency
> problems
> Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz>
> Content-Type: text/plain; charset="iso-8859-2"
>
> Hi!
> I'm a long-time user of monitoring-for-openstack, also known as oschecks.
> Concretely, I used a version from 2015 with the OpenStack Python client
> libraries from Kilo. Now I have upgraded the clients to Mitaka and it
> broke. Even the latest oschecks don't work. I didn't quite expect that,
> given that there are several commits from this year, e.g. by Nagasai
> Vinaykumar Kapalavai and paramite. Can one of them or some other user step
> up and say which version of the OpenStack clients oschecks works with?
> Ideally, it would be written down in requirements.txt so that installs are
> reproducible. Some documentation of the minimal set of required parameters
> would also come in handy.
> Thanks a lot, Tomas from Homeatcloud
>
> The error messages are as absurd as:
> oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0'
> --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring
>
> CRITICAL: Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in
> safe_run
>     method()
>   File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in
> _check_glance_api
>     glance = utils.Glance()
>   File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in
> __init__
>     self.glance.parser = self.glance.get_base_parser(sys.argv)
> TypeError: get_base_parser() takes exactly 1 argument (2 given)
>
> (I can see 4 parameters on the command line.)
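>
> P.S. Until a supported range is documented, my workaround will probably
> be to pin the client libraries back to the Kilo era in the environment
> where oschecks runs. A rough sketch of what I mean (the version bounds
> below are my guesses for Kilo-era releases, not verified values):
>
> $ pip install 'python-glanceclient<1.0' 'python-novaclient<3.0' \
>     'python-keystoneclient<2.0' 'python-cinderclient<1.4'
> $ oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0' \
>     --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring
>
> If someone knows the actually-tested versions, requirements.txt would
> still be the right place for them.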
>
>
> ------------------------------
>
> Message: 9
> Date: Fri, 19 Oct 2018 11:21:25 +0200
> From: Christian Zunker
> To: openstack-operators
> Subject: [Openstack-operators] [heat][cinder] How to create stack
> snapshot including volumes
> Message-ID:
>
> Content-Type: text/plain; charset="utf-8"
>
> Hi List,
>
> I'd like to take snapshots of heat stacks, including the volumes.
> From what I have found so far, this should be possible; you just have to
> configure some parts of OpenStack.
>
> I enabled cinder-backup with the Ceph backend. Backups of volumes are
> working. I configured heat to include the option backups_enabled = True.
>
> When I use openstack stack snapshot create, I get a snapshot but no backups
> of my volumes. I don't get any error messages in heat. Debug logging didn't
> help either.
>
> OpenStack version is Pike on Ubuntu, installed with openstack-ansible.
> The heat version is 9.0.3, so it should also include this bugfix:
> https://bugs.launchpad.net/heat/+bug/1687006
>
> Is anybody using this feature? What am I missing?
>
> Best regards
> Christian
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
>
> ------------------------------
>
> Message: 10
> Date: Fri, 19 Oct 2018 12:42:00 +0300
> From: Adrian Andreias
> To: openstack-operators at lists.openstack.org
> Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1
> released
> Message-ID:
>
> Content-Type: text/plain; charset="utf-8"
>
> Hello,
>
> We've just released Fleio version 1.1.
>
> Fleio is a billing solution and control panel for OpenStack public clouds
> and traditional web hosters.
>
> Fleio software automates the entire process for cloud users. New customers
> can use Fleio to sign up for an account, pay invoices, add credit to their
> account, as well as create and manage cloud resources such as virtual
> machines, storage and networking.
>
> Full feature list:
> https://fleio.com#features
>
> You can see an online demo:
> https://fleio.com/demo
>
> And sign up for a free trial:
> https://fleio.com/signup
>
>
> Cheers!
>
> - Adrian Andreias
> https://fleio.com
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
>
> ------------------------------
>
> Message: 11
> Date: Fri, 19 Oct 2018 20:54:29 +1100
> From: Tony Breeds
> To: OpenStack Development ,
> OpenStack SIGs , OpenStack
> Operators
> Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T
> release of OpenStack
> Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
>> Hello all,
>> As per [1] the nomination period for names for the T release has
>> now closed (actually 3 days ago, sorry). The nominated names and any
>> qualifying remarks can be seen at [2].
>>
>> Proposed Names
>> * Tarryall
>> * Teakettle
>> * Teller
>> * Telluride
>> * Thomas
>> * Thornton
>> * Tiger
>> * Tincup
>> * Timnath
>> * Timber
>> * Tiny Town
>> * Torreys
>> * Trail
>> * Trinidad
>> * Treasure
>> * Troublesome
>> * Trussville
>> * Turret
>> * Tyrone
>>
>> Proposed Names that do not meet the criteria
>> * Train
>
> I have re-worked my openstack/governance change [1] to ask the TC to accept
> adding Train to the poll as (partially) described in [2].
>
> I present the names above to the community and Foundation marketing team
> for consideration.
The list above does contain Train, clearly if the TC > do not approve [1] Train will not be included in the poll when created. > > I apologise for any offence or slight caused by my previous email in > this thread. It was well intentioned albeit, with hindsight, poorly > thought through. > > Yours Tony. > > [1] https://review.openstack.org/#/c/611511/ > > [2] > https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: signature.asc > Type: application/pgp-signature > Size: 488 bytes > Desc: not available > URL: > > > > ------------------------------ > > Message: 12 > Date: Fri, 19 Oct 2018 16:33:17 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > To: Chris Apsey , iain MacDonnell > , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov> > Content-Type: text/plain; charset="utf-8" > > Our NDC domain is LDAP backed. Default is not. > > Our keystone policy.json file is empty {} > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 7:24 PM, "Chris Apsey" wrote: > > We are using multiple keystone domains - still can't reproduce this. > > Do you happen to have a customized keystone policy.json? > > Worst case, I would launch a devstack of your targeted release. If > you > can't reproduce the issue there, you would at least know its caused > by a > nonstandard config rather than a bug (or at least not a bug that's > present > when using a default config) > > On October 18, 2018 18:50:12 iain MacDonnell > > wrote: > > > That all looks fine. > > > > I believe that the "default" policy applies in place of any > that's not > > explicitly specified - i.e. "if there's no matching policy below, you > > need to have the admin role to be able to do it". I do have that > line in > > my policy.json, and I cannot reproduce your problem (see below). > > > > I'm not using domains (other than "default"). I wonder if that's > a factor... 
> > > > ~iain > > > > > > $ openstack user create --password foo user1 > > +---------------------+----------------------------------+ > > | Field | Value | > > +---------------------+----------------------------------+ > > | domain_id | default | > > | enabled | True | > > | id | d18c0031ec56430499a2d690cb1f125c | > > | name | user1 | > > | options | {} | > > | password_expires_at | None | > > +---------------------+----------------------------------+ > > $ openstack user create --password foo user2 > > +---------------------+----------------------------------+ > > | Field | Value | > > +---------------------+----------------------------------+ > > | domain_id | default | > > | enabled | True | > > | id | be9f1061a5104abd834eabe98dff055d | > > | name | user2 | > > | options | {} | > > | password_expires_at | None | > > +---------------------+----------------------------------+ > > $ openstack project create project1 > > +-------------+----------------------------------+ > > | Field | Value | > > +-------------+----------------------------------+ > > | description | | > > | domain_id | default | > > | enabled | True | > > | id | 826876d6d3724018bae6253c7f540cb3 | > > | is_domain | False | > > | name | project1 | > > | parent_id | default | > > | tags | [] | > > +-------------+----------------------------------+ > > $ openstack project create project2 > > +-------------+----------------------------------+ > > | Field | Value | > > +-------------+----------------------------------+ > > | description | | > > | domain_id | default | > > | enabled | True | > > | id | b446b93ac6e24d538c1943acbdd13cb2 | > > | is_domain | False | > > | name | project2 | > > | parent_id | default | > > | tags | [] | > > +-------------+----------------------------------+ > > $ openstack role add --user user1 --project project1 _member_ > > $ openstack role add --user user2 --project project2 _member_ > > $ export OS_PASSWORD=foo > > $ export OS_USERNAME=user1 > > $ export OS_PROJECT_NAME=project1 > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > +--------------------------------------+--------+--------+ > > $ openstack image create --private image1 > > > +------------------+------------------------------------------------------------------------------+ > > | Field | Value > > | > > > +------------------+------------------------------------------------------------------------------+ > > | checksum | None > > | > > | container_format | bare > > | > > | created_at | 2018-10-18T22:17:41Z > > | > > | disk_format | raw > > | > > | file | > > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > > | > > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > | > > | min_disk | 0 > > | > > | min_ram | 0 > > | > > | name | image1 > > | > > | owner | 826876d6d3724018bae6253c7f540cb3 > > | > > | properties | locations='[]', os_hash_algo='None', > > os_hash_value='None', os_hidden='False' | > > | protected | False > > | > > | schema | /v2/schemas/image > > | > > | size | None > > | > > | status | queued > > | > > | tags | > > | > > | updated_at | 2018-10-18T22:17:41Z > > | > > | virtual_size | None > > | > > | visibility | private > > | > > > +------------------+------------------------------------------------------------------------------+ > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | 
Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > +--------------------------------------+--------+--------+ > > $ export OS_USERNAME=user2 > > $ export OS_PROJECT_NAME=project2 > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > +--------------------------------------+--------+--------+ > > $ export OS_USERNAME=admin > > $ export OS_PROJECT_NAME=admin > > $ export OS_PASSWORD=xxx > > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > $ export OS_USERNAME=user2 > > $ export OS_PROJECT_NAME=project2 > > $ export OS_PASSWORD=foo > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > +--------------------------------------+--------+--------+ > > $ > > > > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > > INTEGRA, INC.] wrote: > >> openstack user create --domain default --password xxxxxxxx > --project-domain > >> ndc --project test mike > >> > >> > >> openstack role add --user mike --user-domain default --project > test user > >> > >> my admin account is in the NDC domain with a different username. > >> > >> > >> > >> /etc/glance/policy.json > >> { > >> > >> "context_is_admin": "role:admin", > >> "default": "role:admin", > >> > >> > >> > >> > >> I'm not terribly familiar with the policies but I feel like that > default > >> line is making everyone an admin by default? > >> > >> > >> Mike Moore, M.S.S.E. > >> > >> Systems Engineer, Goddard Private Cloud > >> Michael.D.Moore at nasa.gov > >> > >> Hydrogen fusion brightens my day. > >> > >> > >> On 10/18/18, 6:25 PM, "iain MacDonnell" > wrote: > >> > >> > >> I suspect that your non-admin user is not really non-admin. How > did you > >> create it? > >> > >> What you have for "context_is_admin" in glance's policy.json ? > >> > >> ~iain > >> > >> > >> On 10/18/2018 03:11 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > >> INTEGRA, INC.] wrote: > >>> I have replicated this unexpected behavior in a Pike test > environment, in > >>> addition to our Queens environment. > >>> > >>> > >>> > >>> Mike Moore, M.S.S.E. > >>> > >>> Systems Engineer, Goddard Private Cloud > >>> Michael.D.Moore at nasa.gov > >>> > >>> Hydrogen fusion brightens my day. > >>> > >>> > >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane > (GSFC-720.0)[BUSINESS INTEGRA, > >>> INC.]" wrote: > >>> > >>> Yes. I verified it by creating a non-admin user in a > different tenant. I > >>> created a new image, set to private with the project defined > as our admin > >>> tenant. > >>> > >>> In the database I can see that the image is 'private' and > the owner is the > >>> ID of the admin tenant. > >>> > >>> Mike Moore, M.S.S.E. > >>> > >>> Systems Engineer, Goddard Private Cloud > >>> Michael.D.Moore at nasa.gov > >>> > >>> Hydrogen fusion brightens my day. > >>> > >>> > >>> On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: > >>> > >>> > >>> > >>> On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS > >>> INTEGRA, INC.] 
wrote: > >>> > I’m seeing unexpected behavior in our Queens > environment related to > >>> > Glance image visibility. Specifically users who, based > on my > >>> > understanding of the visibility and ownership fields, > should NOT be able > >>> > to see or view the image. > >>> > > >>> > If I create a new image with openstack image create > and specify –project > >>> > and –private a non-admin user in a different > tenant can see and > >>> > boot that image. > >>> > > >>> > That seems to be the opposite of what should happen. > Any ideas? > >>> > >>> Yep, something's not right there. > >>> > >>> Are you sure that the user that can see the image > doesn't have the admin > >>> role (for the project in its keystone token) ? > >>> > >>> Did you verify that the image's owner is what you > intended, and that the > >>> visibility really is "private" ? > >>> > >>> ~iain > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > >>> > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > > ------------------------------ > > Message: 13 > Date: Fri, 19 Oct 2018 16:54:12 +0000 > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > To: Chris Apsey , iain MacDonnell > , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? 
- > Non admin users can see private images from other tenants > Message-ID: > Content-Type: text/plain; charset="utf-8" > > > For reference, here is our full glance policy.json > > > { > "context_is_admin": "role:admin", > "default": "role:admin", > > "add_image": "", > "delete_image": "", > "get_image": "", > "get_images": "", > "modify_image": "", > "publicize_image": "role:admin", > "communitize_image": "", > "copy_from": "", > > "download_image": "", > "upload_image": "", > > "delete_image_location": "", > "get_image_location": "", > "set_image_location": "", > > "add_member": "", > "delete_member": "", > "get_member": "", > "get_members": "", > "modify_member": "", > > "manage_image_cache": "role:admin", > > "get_task": "", > "get_tasks": "", > "add_task": "", > "modify_task": "", > "tasks_api_access": "role:admin", > > "deactivate": "", > "reactivate": "", > > "get_metadef_namespace": "", > "get_metadef_namespaces":"", > "modify_metadef_namespace":"", > "add_metadef_namespace":"", > > "get_metadef_object":"", > "get_metadef_objects":"", > "modify_metadef_object":"", > "add_metadef_object":"", > > "list_metadef_resource_types":"", > "get_metadef_resource_type":"", > "add_metadef_resource_type_association":"", > > "get_metadef_property":"", > "get_metadef_properties":"", > "modify_metadef_property":"", > "add_metadef_property":"", > > "get_metadef_tag":"", > "get_metadef_tags":"", > "modify_metadef_tag":"", > "add_metadef_tag":"", > "add_metadef_tags":"" > > } > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > INTEGRA, INC.]" wrote: > > Our NDC domain is LDAP backed. Default is not. > > Our keystone policy.json file is empty {} > > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > Michael.D.Moore at nasa.gov > > Hydrogen fusion brightens my day. > > > On 10/18/18, 7:24 PM, "Chris Apsey" wrote: > > We are using multiple keystone domains - still can't reproduce > this. > > Do you happen to have a customized keystone policy.json? > > Worst case, I would launch a devstack of your targeted > release. If you > can't reproduce the issue there, you would at least know its > caused by a > nonstandard config rather than a bug (or at least not a bug > that's present > when using a default config) > > On October 18, 2018 18:50:12 iain MacDonnell > > wrote: > > > That all looks fine. > > > > I believe that the "default" policy applies in place of any > that's not > > explicitly specified - i.e. "if there's no matching policy > below, you > > need to have the admin role to be able to do it". I do have > that line in > > my policy.json, and I cannot reproduce your problem (see below). > > > > I'm not using domains (other than "default"). I wonder if > that's a factor... 
> > > > ~iain > > > > > > $ openstack user create --password foo user1 > > +---------------------+----------------------------------+ > > | Field | Value | > > +---------------------+----------------------------------+ > > | domain_id | default | > > | enabled | True | > > | id | d18c0031ec56430499a2d690cb1f125c | > > | name | user1 | > > | options | {} | > > | password_expires_at | None | > > +---------------------+----------------------------------+ > > $ openstack user create --password foo user2 > > +---------------------+----------------------------------+ > > | Field | Value | > > +---------------------+----------------------------------+ > > | domain_id | default | > > | enabled | True | > > | id | be9f1061a5104abd834eabe98dff055d | > > | name | user2 | > > | options | {} | > > | password_expires_at | None | > > +---------------------+----------------------------------+ > > $ openstack project create project1 > > +-------------+----------------------------------+ > > | Field | Value | > > +-------------+----------------------------------+ > > | description | | > > | domain_id | default | > > | enabled | True | > > | id | 826876d6d3724018bae6253c7f540cb3 | > > | is_domain | False | > > | name | project1 | > > | parent_id | default | > > | tags | [] | > > +-------------+----------------------------------+ > > $ openstack project create project2 > > +-------------+----------------------------------+ > > | Field | Value | > > +-------------+----------------------------------+ > > | description | | > > | domain_id | default | > > | enabled | True | > > | id | b446b93ac6e24d538c1943acbdd13cb2 | > > | is_domain | False | > > | name | project2 | > > | parent_id | default | > > | tags | [] | > > +-------------+----------------------------------+ > > $ openstack role add --user user1 --project project1 _member_ > > $ openstack role add --user user2 --project project2 _member_ > > $ export OS_PASSWORD=foo > > $ export OS_USERNAME=user1 > > $ export OS_PROJECT_NAME=project1 > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > +--------------------------------------+--------+--------+ > > $ openstack image create --private image1 > > > +------------------+------------------------------------------------------------------------------+ > > | Field | Value > > | > > > +------------------+------------------------------------------------------------------------------+ > > | checksum | None > > | > > | container_format | bare > > | > > | created_at | 2018-10-18T22:17:41Z > > | > > | disk_format | raw > > | > > | file | > > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > > | > > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > | > > | min_disk | 0 > > | > > | min_ram | 0 > > | > > | name | image1 > > | > > | owner | 826876d6d3724018bae6253c7f540cb3 > > | > > | properties | locations='[]', os_hash_algo='None', > > os_hash_value='None', os_hidden='False' | > > | protected | False > > | > > | schema | /v2/schemas/image > > | > > | size | None > > | > > | status | queued > > | > > | tags | > > | > > | updated_at | 2018-10-18T22:17:41Z > > | > > | virtual_size | None > > | > > | visibility | private > > | > > > +------------------+------------------------------------------------------------------------------+ > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | 
Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > +--------------------------------------+--------+--------+ > > $ export OS_USERNAME=user2 > > $ export OS_PROJECT_NAME=project2 > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > +--------------------------------------+--------+--------+ > > $ export OS_USERNAME=admin > > $ export OS_PROJECT_NAME=admin > > $ export OS_PASSWORD=xxx > > $ openstack image set --public > 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > $ export OS_USERNAME=user2 > > $ export OS_PROJECT_NAME=project2 > > $ export OS_PASSWORD=foo > > $ openstack image list > > +--------------------------------------+--------+--------+ > > | ID | Name | Status | > > +--------------------------------------+--------+--------+ > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > +--------------------------------------+--------+--------+ > > $ > > > > > > On 10/18/2018 03:32 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS > > INTEGRA, INC.] wrote: > >> openstack user create --domain default --password xxxxxxxx > --project-domain > >> ndc --project test mike > >> > >> > >> openstack role add --user mike --user-domain default > --project test user > >> > >> my admin account is in the NDC domain with a different username. > >> > >> > >> > >> /etc/glance/policy.json > >> { > >> > >> "context_is_admin": "role:admin", > >> "default": "role:admin", > >> > >> > >> > >> > >> I'm not terribly familiar with the policies but I feel like > that default > >> line is making everyone an admin by default? > >> > >> > >> Mike Moore, M.S.S.E. > >> > >> Systems Engineer, Goddard Private Cloud > >> Michael.D.Moore at nasa.gov > >> > >> Hydrogen fusion brightens my day. > >> > >> > >> On 10/18/18, 6:25 PM, "iain MacDonnell" > wrote: > >> > >> > >> I suspect that your non-admin user is not really non-admin. > How did you > >> create it? > >> > >> What you have for "context_is_admin" in glance's policy.json ? > >> > >> ~iain > >> > >> > >> On 10/18/2018 03:11 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS > >> INTEGRA, INC.] wrote: > >>> I have replicated this unexpected behavior in a Pike test > environment, in > >>> addition to our Queens environment. > >>> > >>> > >>> > >>> Mike Moore, M.S.S.E. > >>> > >>> Systems Engineer, Goddard Private Cloud > >>> Michael.D.Moore at nasa.gov > >>> > >>> Hydrogen fusion brightens my day. > >>> > >>> > >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane > (GSFC-720.0)[BUSINESS INTEGRA, > >>> INC.]" wrote: > >>> > >>> Yes. I verified it by creating a non-admin user in a > different tenant. I > >>> created a new image, set to private with the project > defined as our admin > >>> tenant. > >>> > >>> In the database I can see that the image is 'private' > and the owner is the > >>> ID of the admin tenant. > >>> > >>> Mike Moore, M.S.S.E. > >>> > >>> Systems Engineer, Goddard Private Cloud > >>> Michael.D.Moore at nasa.gov > >>> > >>> Hydrogen fusion brightens my day. > >>> > >>> > >>> On 10/18/18, 1:07 AM, "iain MacDonnell" > wrote: > >>> > >>> > >>> > >>> On 10/17/2018 12:29 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS > >>> INTEGRA, INC.] 
wrote: > >>> > I’m seeing unexpected behavior in our Queens > environment related to > >>> > Glance image visibility. Specifically users who, > based on my > >>> > understanding of the visibility and ownership > fields, should NOT be able > >>> > to see or view the image. > >>> > > >>> > If I create a new image with openstack image > create and specify –project > >>> > and –private a non-admin user in a > different tenant can see and > >>> > boot that image. > >>> > > >>> > That seems to be the opposite of what should > happen. Any ideas? > >>> > >>> Yep, something's not right there. > >>> > >>> Are you sure that the user that can see the image > doesn't have the admin > >>> role (for the project in its keystone token) ? > >>> > >>> Did you verify that the image's owner is what you > intended, and that the > >>> visibility really is "private" ? > >>> > >>> ~iain > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > >>> > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > ------------------------------ > > Message: 14 > Date: Fri, 19 Oct 2018 13:45:03 -0400 > From: Jay Pipes > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. > 1.1 released > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > Please do not use these mailing lists to advertise > closed-source/proprietary software solutions. > > Thank you, > -jay > > On 10/19/2018 05:42 AM, Adrian Andreias wrote: >> Hello, >> >> We've just released Fleio version 1.1. >> >> Fleio is a billing solution and control panel for OpenStack public >> clouds and traditional web hosters. >> >> Fleio software automates the entire process for cloud users. New >> customers can use Fleio to sign up for an account, pay invoices, add >> credit to their account, as well as create and manage cloud resources >> such as virtual machines, storage and networking. >> >> Full feature list: >> https://fleio.com#features > >> >> You can see an online demo: >> https://fleio.com/demo > >> >> And sign-up for a free trial: >> https://fleio.com/signup > >> >> >> >> Cheers! 
>> >> - Adrian Andreias >> https://fleio.com > >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > > > ------------------------------ > > Message: 15 > Date: Fri, 19 Oct 2018 20:13:40 +0200 > From: Mohammed Naser > To: jaypipes at gmail.com > Cc: openstack-operators > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. > 1.1 released > Message-ID: > > > Content-Type: text/plain; charset="UTF-8" > > On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote: >> >> Please do not use these mailing lists to advertise >> closed-source/proprietary software solutions. > > +1 > >> Thank you, >> -jay >> >> On 10/19/2018 05:42 AM, Adrian Andreias wrote: >> > Hello, >> > >> > We've just released Fleio version 1.1. >> > >> > Fleio is a billing solution and control panel for OpenStack public >> > clouds and traditional web hosters. >> > >> > Fleio software automates the entire process for cloud users. New >> > customers can use Fleio to sign up for an account, pay invoices, add >> > credit to their account, as well as create and manage cloud resources >> > such as virtual machines, storage and networking. >> > >> > Full feature list: >> > https://fleio.com#features > >> > >> > You can see an online demo: >> > https://fleio.com/demo > >> > >> > And sign-up for a free trial: >> > https://fleio.com/signup > >> > >> > >> > >> > Cheers! >> > >> > - Adrian Andreias >> > https://fleio.com > >> > >> > >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > > > > ------------------------------ > > Message: 16 > Date: Fri, 19 Oct 2018 14:39:29 -0400 > From: Erik McCormick > To: openstack-operators > Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae > and missing tenant network interface > Message-ID: > > > Content-Type: text/plain; charset="UTF-8" > > I've been wrestling with getting Octavia up and running and have > become stuck on two issues. I'm hoping someone has run into these > before. My google foo has come up empty. > > Issue 1: > When the Octavia controller tries to poll the amphora instance, it > tries repeatedly and eventually fails. The error on the controller > side is: > > 2018-10-19 14:17:39.181 26 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > retries (currently set to 300) exhausted. The amphora is unavailable. 
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),))
>
> On the amphora side I see:
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> failure (_ssl.c:1754)
>
> I've generated certificates both with the script in the Octavia git
> repo, and with the OpenStack Ansible playbook. I can see that they are
> present in /etc/octavia/certs.
>
> I'm using the Kolla (Queens) containers for the control plane so I'm
> sure I've satisfied all the python library constraints.
>
> Issue 2:
> I'm not sure how it gets configured, but the tenant network interface
> (ens6) never comes up. I can spawn other instances on that network
> with no issue, and I can see that Neutron has the port attached to the
> instance. However, in the instance this is all I get:
>
> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: ens3: mtu 9000 qdisc pfifo_fast
> state UP group default qlen 1000
>     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
>     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f816:3eff:fe30:c460/64 scope link
>        valid_lft forever preferred_lft forever
> 3: ens6: mtu 1500 qdisc noop state DOWN group
> default qlen 1000
>     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
>
> There's no evidence of the interface anywhere else, including udev rules.
>
> Any help with either or both issues would be greatly appreciated.
>
> Cheers,
> Erik
>
>
> ------------------------------
>
> Message: 17
> Date: Sat, 20 Oct 2018 01:47:42 +0200
> From: Gaël THEROND
> To: Erik McCormick
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [Octavia] SSL errors polling
>         amphorae and missing tenant network interface
> Message-ID:
>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Erik!
>
> Glad I'm not the only one having this issue with the SSL communication
> between the amphora and the CP.
>
> I don't yet have a clear answer on that issue either, but I think your
> second issue is not actually a problem: the interface is mounted inside
> a network namespace, so you'll need to list the NICs in that namespace
> as well.
>
> Use an 'ip netns ls' to get the namespace.
>
> Hope it will help.
>
> Le ven. 19 oct.
2018 à 20:40, Erik McCormick a > écrit : > >> I've been wrestling with getting Octavia up and running and have >> become stuck on two issues. I'm hoping someone has run into these >> before. My google foo has come up empty. >> >> Issue 1: >> When the Octavia controller tries to poll the amphora instance, it >> tries repeatedly and eventually fails. The error on the controller >> side is: >> >> 2018-10-19 14:17:39.181 26 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >> retries (currently set to 300) exhausted. The amphora is unavailable. >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >> SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)) >> >> On the amphora side I see: >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >> failure (_ssl.c:1754) >> >> I've generated certificates both with the script in the Octavia git >> repo, and with the Openstack Ansible playbook. I can see that they are >> present in /etc/octavia/certs. >> >> I'm using the Kolla (Queens) containers for the control plane so I'm >> sure I've satisfied all the python library constraints. >> >> Issue 2: >> I"m not sure how it gets configured, but the tenant network interface >> (ens6) never comes up. I can spawn other instances on that network >> with no issue, and I can see that Neutron has the port attached to the >> instance. However, in the instance this is all I get: >> >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> group default qlen 1 >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> inet 127.0.0.1/8 scope host lo >> valid_lft forever preferred_lft forever >> inet6 ::1/128 scope host >> valid_lft forever preferred_lft forever >> 2: ens3: mtu 9000 qdisc pfifo_fast >> state UP group default qlen 1000 >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >> valid_lft forever preferred_lft forever >> inet6 fe80::f816:3eff:fe30:c460/64 scope link >> valid_lft forever preferred_lft forever >> 3: ens6: mtu 1500 qdisc noop state DOWN group >> default qlen 1000 >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >> >> There's no evidence of the interface anywhere else including udev rules. >> >> Any help with either or both issues would be greatly appreciated. 
>> >> Cheers, >> Erik >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > ------------------------------ > > End of OpenStack-operators Digest, Vol 96, Issue 7 > ************************************************** > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From florian.engelmann at everyware.ch Wed Oct 24 07:14:53 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 24 Oct 2018 09:14:53 +0200 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> Message-ID: Ohoh - thank you for your empathy :) And those great details about how to setup this mgmt network. I will try to do so this afternoon but solving that routing "puzzle" (virtual network to control nodes) I will need our network guys to help me out... But I will need to tell all Amphorae a static route to the gateway that is routing to the control nodes? Am 10/23/18 um 6:57 PM schrieb Erik McCormick: > So in your other email you said asked if there was a guide for > deploying it with Kolla ansible... > > Oh boy. No there's not. I don't know if you've seen my recent mails on > Octavia, but I am going through this deployment process with > kolla-ansible right now and it is lacking in a few areas. > > If you plan to use different CA certificates for client and server in > Octavia, you'll need to add that into the playbook. Presently it only > copies over ca_01.pem, cacert.key, and client.pem and uses them for > everything. I was completely unable to make it work with only one CA > as I got some SSL errors. It passes gate though, so I aasume it must > work? I dunno. > > Networking comments and a really messy kolla-ansible / octavia how-to below... > > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann > wrote: >> >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann >>> wrote: >>>> >>>> Hi, >>>> >>>> We did test Octavia with Pike (DVR deployment) and everything was >>>> working right our of the box. We changed our underlay network to a >>>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted >>>> to have that much cables in a rack. >>>> >>>> Octavia is not working right now as the lb-mgmt-net does not exist on >>>> the compute nodes nor does a br-ex. 
>>>>
>>>> The control nodes running
>>>>
>>>> octavia_worker
>>>> octavia_housekeeping
>>>> octavia_health_manager
>>>> octavia_api
>>>>
>>>> (the amphora VMs sit on the lb-mgmt-net, e.g. 172.16.0.0/16, with a
>>>> default GW)
>>>>
>>>> and as far as I understood octavia_worker, octavia_housekeeping and
>>>> octavia_health_manager have to talk to the amphora instances. But the
>>>> control nodes are spread over three different leafs. So each control
>>>> node is in a different L2 domain.
>>>>
>>>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>>>
>>>> - Compute nodes have no "stretched" L2 domain
>>>> - Control nodes, compute nodes and network nodes are in L3 networks like
>>>> api, storage, ...
>>>> - Only network nodes are connected to a L2 domain (with a separate NIC)
>>>> providing the "public" network
>>>>
>>> You'll need to add a new bridge to your compute nodes and create a
>>> provider network associated with that bridge. In my setup this is
>>> simply a flat network tied to a tagged interface. In your case it
>>> probably makes more sense to make a new VNI and create a vxlan
>>> provider network. The routing in your switches should handle the rest.
>>
>> OK, that's what I'm trying right now. But I don't get how to set up
>> something like a VxLAN provider network. I thought only vlan and flat
>> are supported as provider network types? I guess it is not possible to
>> use the tunnel interface that is used for tenant networks?
>> So I have to create a separate VxLAN on the control and compute nodes like:
>>
>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
>> dev vlan3535 ttl 5
>> # ip addr add 172.16.1.11/20 dev vxoctavia
>> # ip link set vxoctavia up
>>
>> and use it like a flat provider network, true?
>>
> This is a fine way of doing things, but it's only half the battle.
> You'll need to add a bridge on the compute nodes and bind it to that
> new interface. Something like this if you're using openvswitch:
>
> docker exec openvswitch_db
> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>
> Also you'll want to remove the IP address from that interface as it's
> going to be a bridge. Think of it like your public (br-ex) interface
> on your network nodes.
>
> From there you'll need to update the bridge mappings via kolla
> overrides. This would usually be in /etc/kolla/config/neutron. Create
> a subdirectory for your compute inventory group and create an
> ml2_conf.ini there. So you'd end up with something like:
>
> [root at kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> [ml2_type_flat]
> flat_networks = mgmt-net
>
> [ovs]
> bridge_mappings = mgmt-net:br-mgmt
>
> Run 'kolla-ansible --tags neutron reconfigure' to push out the new
> configs. Note that there is a bug where the neutron containers may not
> restart after the change, so you'll probably need to do a 'docker
> container restart neutron_openvswitch_agent' on each compute node.
>
> At this point, you'll need to create the provider network in the admin
> project like:
>
> openstack network create --provider-network-type flat
> --provider-physical-network mgmt-net lb-mgmt-net
>
> And then create a normal subnet attached to this network with some
> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> uses that by default. I'm not sure if it matters since the network
> traffic will be isolated on a bridge, but it makes me paranoid so I
> avoided it.
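>
> As a concrete sketch (the range, pool and names here are assumptions
> from my notes, so adjust them to your environment), the subnet create
> could look like:
>
> openstack subnet create --network lb-mgmt-net \
>   --subnet-range 172.30.0.0/16 --gateway 172.30.0.1 \
>   --allocation-pool start=172.30.0.10,end=172.30.255.250 lb-mgmt-subnet
>
> Setting the gateway on the subnet also means the amphorae get a default
> route at boot, so you don't have to push static routes to them later.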
> > For your controllers, I think you can just let everything function off > your api interface since you're routing in your spines. Set up a > gateway somewhere from that lb-mgmt network and save yourself the > complication of adding an interface to your controllers. If you choose > to use a separate interface on your controllers, you'll need to make > sure this patch is in your kolla-ansible install or cherry pick it. > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > I don't think that's been backported at all, so unless you're running > off master you'll need to go get it. > > From here on out, the regular Octavia instruction should serve you. > Create a flavor, Create a security group, and capture their UUIDs > along with the UUID of the provider network you made. Override them in > globals.yml with: > > octavia_amp_boot_network_list: > octavia_amp_secgroup_list: > octavia_amp_flavor_id: > > This is all from my scattered notes and bad memory. Hopefully it makes > sense. Corrections welcome. > > -Erik > > > >> >> >>> >>> -Erik >>>> >>>> All the best, >>>> Florian >>>> _______________________________________________ >>>> OpenStack-operators mailing list >>>> OpenStack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From florian.engelmann at everyware.ch Wed Oct 24 07:52:29 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 24 Oct 2018 09:52:29 +0200 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> Message-ID: <6c851f2c-1bc1-4fb8-6a9a-377de4be8247@everyware.ch> Hi Michael, yes I definitely would prefer to build a routed setup. Would it be an option for you to provide some rough step by step "how-to" with openvswitch in a non-DVR setup? All the best, Flo Am 10/23/18 um 7:48 PM schrieb Michael Johnson: > I am still catching up on e-mail from the weekend. > > There are a lot of different options for how to implement the > lb-mgmt-network for the controller<->amphora communication. I can't > talk to what options Kolla provides, but I can talk to how Octavia > works. > > One thing to note on the lb-mgmt-net issue, if you can setup routes > such that the controllers can reach the IP addresses used for the > lb-mgmt-net, and that the amphora can reach the controllers, Octavia > can run with a routed lb-mgmt-net setup. There is no L2 requirement > between the controllers and the amphora instances. > > Michael > > On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick > wrote: >> >> So in your other email you said asked if there was a guide for >> deploying it with Kolla ansible... >> >> Oh boy. No there's not. I don't know if you've seen my recent mails on >> Octavia, but I am going through this deployment process with >> kolla-ansible right now and it is lacking in a few areas. >> >> If you plan to use different CA certificates for client and server in >> Octavia, you'll need to add that into the playbook. 
Presently it only >> copies over ca_01.pem, cacert.key, and client.pem and uses them for >> everything. I was completely unable to make it work with only one CA >> as I got some SSL errors. It passes gate though, so I aasume it must >> work? I dunno. >> >> Networking comments and a really messy kolla-ansible / octavia how-to below... >> >> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann >> wrote: >>> >>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: >>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann >>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> We did test Octavia with Pike (DVR deployment) and everything was >>>>> working right our of the box. We changed our underlay network to a >>>>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted >>>>> to have that much cables in a rack. >>>>> >>>>> Octavia is not working right now as the lb-mgmt-net does not exist on >>>>> the compute nodes nor does a br-ex. >>>>> >>>>> The control nodes running >>>>> >>>>> octavia_worker >>>>> octavia_housekeeping >>>>> octavia_health_manager >>>>> octavia_api >>>>> >>>>> and as far as I understood octavia_worker, octavia_housekeeping and >>>>> octavia_health_manager have to talk to the amphora instances. But the >>>>> control nodes are spread over three different leafs. So each control >>>>> node in a different L2 domain. >>>>> >>>>> So the question is how to deploy a lb-mgmt-net network in our setup? >>>>> >>>>> - Compute nodes have no "stretched" L2 domain >>>>> - Control nodes, compute nodes and network nodes are in L3 networks like >>>>> api, storage, ... >>>>> - Only network nodes are connected to a L2 domain (with a separated NIC) >>>>> providing the "public" network >>>>> >>>> You'll need to add a new bridge to your compute nodes and create a >>>> provider network associated with that bridge. In my setup this is >>>> simply a flat network tied to a tagged interface. In your case it >>>> probably makes more sense to make a new VNI and create a vxlan >>>> provider network. The routing in your switches should handle the rest. >>> >>> Ok that's what I try right now. But I don't get how to setup something >>> like a VxLAN provider Network. I thought only vlan and flat is supported >>> as provider network? I guess it is not possible to use the tunnel >>> interface that is used for tenant networks? >>> So I have to create a separated VxLAN on the control and compute nodes like: >>> >>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1 >>> dev vlan3535 ttl 5 >>> # ip addr add 172.16.1.11/20 dev vxoctavia >>> # ip link set vxoctavia up >>> >>> and use it like a flat provider network, true? >>> >> This is a fine way of doing things, but it's only half the battle. >> You'll need to add a bridge on the compute nodes and bind it to that >> new interface. Something like this if you're using openvswitch: >> >> docker exec openvswitch_db >> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia >> >> Also you'll want to remove the IP address from that interface as it's >> going to be a bridge. Think of it like your public (br-ex) interface >> on your network nodes. >> >> From there you'll need to update the bridge mappings via kolla >> overrides. This would usually be in /etc/kolla/config/neutron. Create >> a subdirectory for your compute inventory group and create an >> ml2_conf.ini there. 
So you'd end up with something like: >> >> [root at kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini >> [ml2_type_flat] >> flat_networks = mgmt-net >> >> [ovs] >> bridge_mappings = mgmt-net:br-mgmt >> >> run kolla-ansible --tags neutron reconfigure to push out the new >> configs. Note that there is a bug where the neutron containers may not >> restart after the change, so you'll probably need to do a 'docker >> container restart neutron_openvswitch_agent' on each compute node. >> >> At this point, you'll need to create the provider network in the admin >> project like: >> >> openstack network create --provider-network-type flat >> --provider-physical-network mgmt-net lb-mgmt-net >> >> And then create a normal subnet attached to this network with some >> largeish address scope. I wouldn't use 172.16.0.0/16 because docker >> uses that by default. I'm not sure if it matters since the network >> traffic will be isolated on a bridge, but it makes me paranoid so I >> avoided it. >> >> For your controllers, I think you can just let everything function off >> your api interface since you're routing in your spines. Set up a >> gateway somewhere from that lb-mgmt network and save yourself the >> complication of adding an interface to your controllers. If you choose >> to use a separate interface on your controllers, you'll need to make >> sure this patch is in your kolla-ansible install or cherry pick it. >> >> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 >> >> I don't think that's been backported at all, so unless you're running >> off master you'll need to go get it. >> >> From here on out, the regular Octavia instruction should serve you. >> Create a flavor, Create a security group, and capture their UUIDs >> along with the UUID of the provider network you made. Override them in >> globals.yml with: >> >> octavia_amp_boot_network_list: >> octavia_amp_secgroup_list: >> octavia_amp_flavor_id: >> >> This is all from my scattered notes and bad memory. Hopefully it makes >> sense. Corrections welcome. >> >> -Erik >> >> >> >>> >>> >>>> >>>> -Erik >>>>> >>>>> All the best, >>>>> Florian >>>>> _______________________________________________ >>>>> OpenStack-operators mailing list >>>>> OpenStack-operators at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL:

From florian.engelmann at everyware.ch  Wed Oct 24 09:37:07 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Wed, 24 Oct 2018 11:37:07 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without
 DVR
In-Reply-To: <6c851f2c-1bc1-4fb8-6a9a-377de4be8247@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>
 <6c851f2c-1bc1-4fb8-6a9a-377de4be8247@everyware.ch>
Message-ID: <31308826-2656-f4ad-bfef-74abb196d7c1@everyware.ch>

Small update: I am still stuck at the point "how to route the L2
lb-mgmt-net to my different physical L3 networks?". As we want to
distribute our control nodes over different leafs, each with its own L2
domain, we will have to route to all of those leafs. Should we enable
the compute nodes to route to those controller networks? And how?

Am 10/24/18 um 9:52 AM schrieb Florian Engelmann:
> Hi Michael,
>
> yes I definitely would prefer to build a routed setup. Would it be an
> option for you to provide some rough step by step "how-to" with
> openvswitch in a non-DVR setup?
>
> All the best,
> Flo
>
>
> Am 10/23/18 um 7:48 PM schrieb Michael Johnson:
>> I am still catching up on e-mail from the weekend.
>>
>> There are a lot of different options for how to implement the
>> lb-mgmt-network for the controller<->amphora communication. I can't
>> talk to what options Kolla provides, but I can talk to how Octavia
>> works.
>>
>> One thing to note on the lb-mgmt-net issue, if you can setup routes
>> such that the controllers can reach the IP addresses used for the
>> lb-mgmt-net, and that the amphora can reach the controllers, Octavia
>> can run with a routed lb-mgmt-net setup. There is no L2 requirement
>> between the controllers and the amphora instances.
>>
>> Michael
>>
>> On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
>> wrote:
>>>
>>> So in your other email you asked if there was a guide for
>>> deploying it with Kolla ansible...
>>>
>>> Oh boy. No there's not. I don't know if you've seen my recent mails on
>>> Octavia, but I am going through this deployment process with
>>> kolla-ansible right now and it is lacking in a few areas.
>>>
>>> If you plan to use different CA certificates for client and server in
>>> Octavia, you'll need to add that into the playbook. Presently it only
>>> copies over ca_01.pem, cacert.key, and client.pem and uses them for
>>> everything. I was completely unable to make it work with only one CA
>>> as I got some SSL errors. It passes gate though, so I assume it must
>>> work? I dunno.
>>>
>>> Networking comments and a really messy kolla-ansible / octavia how-to
>>> below...
>>>
>>> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>>> wrote:
>>>>
>>>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>>>> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We did test Octavia with Pike (DVR deployment) and everything was
>>>>>> working right out of the box. We changed our underlay network to a
>>>>>> Layer3 spine-leaf network now and did not deploy DVR as we didn't
>>>>>> want to have that many cables in a rack.
>>>>>>
>>>>>> Octavia is not working right now as the lb-mgmt-net does not exist on
>>>>>> the compute nodes nor does a br-ex.
>>>>>> >>>>>> The control nodes running >>>>>> >>>>>> octavia_worker >>>>>> octavia_housekeeping >>>>>> octavia_health_manager >>>>>> octavia_api >>>>>> >>>>>> and as far as I understood octavia_worker, octavia_housekeeping and >>>>>> octavia_health_manager have to talk to the amphora instances. But the >>>>>> control nodes are spread over three different leafs. So each control >>>>>> node in a different L2 domain. >>>>>> >>>>>> So the question is how to deploy a lb-mgmt-net network in our setup? >>>>>> >>>>>> - Compute nodes have no "stretched" L2 domain >>>>>> - Control nodes, compute nodes and network nodes are in L3 >>>>>> networks like >>>>>> api, storage, ... >>>>>> - Only network nodes are connected to a L2 domain (with a >>>>>> separated NIC) >>>>>> providing the "public" network >>>>>> >>>>> You'll need to add a new bridge to your compute nodes and create a >>>>> provider network associated with that bridge. In my setup this is >>>>> simply a flat network tied to a tagged interface. In your case it >>>>> probably makes more sense to make a new VNI and create a vxlan >>>>> provider network. The routing in your switches should handle the rest. >>>> >>>> Ok that's what I try right now. But I don't get how to setup something >>>> like a VxLAN provider Network. I thought only vlan and flat is >>>> supported >>>> as provider network? I guess it is not possible to use the tunnel >>>> interface that is used for tenant networks? >>>> So I have to create a separated VxLAN on the control and compute >>>> nodes like: >>>> >>>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1 >>>> dev vlan3535 ttl 5 >>>> # ip addr add 172.16.1.11/20 dev vxoctavia >>>> # ip link set vxoctavia up >>>> >>>> and use it like a flat provider network, true? >>>> >>> This is a fine way of doing things, but it's only half the battle. >>> You'll need to add a bridge on the compute nodes and bind it to that >>> new interface. Something like this if you're using openvswitch: >>> >>> docker exec openvswitch_db >>> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia >>> >>> Also you'll want to remove the IP address from that interface as it's >>> going to be a bridge. Think of it like your public (br-ex) interface >>> on your network nodes. >>> >>>  From there you'll need to update the bridge mappings via kolla >>> overrides. This would usually be in /etc/kolla/config/neutron. Create >>> a subdirectory for your compute inventory group and create an >>> ml2_conf.ini there. So you'd end up with something like: >>> >>> [root at kolla-deploy ~]# cat >>> /etc/kolla/config/neutron/compute/ml2_conf.ini >>> [ml2_type_flat] >>> flat_networks = mgmt-net >>> >>> [ovs] >>> bridge_mappings = mgmt-net:br-mgmt >>> >>> run kolla-ansible --tags neutron reconfigure to push out the new >>> configs. Note that there is a bug where the neutron containers may not >>> restart after the change, so you'll probably need to do a 'docker >>> container restart neutron_openvswitch_agent' on each compute node. >>> >>> At this point, you'll need to create the provider network in the admin >>> project like: >>> >>> openstack network create --provider-network-type flat >>> --provider-physical-network mgmt-net lb-mgmt-net >>> >>> And then create a normal subnet attached to this network with some >>> largeish address scope. I wouldn't use 172.16.0.0/16 because docker >>> uses that by default. I'm not sure if it matters since the network >>> traffic will be isolated on a bridge, but it makes me paranoid so I >>> avoided it. 
>>> For your controllers, I think you can just let everything function off
>>> your api interface since you're routing in your spines. Set up a
>>> gateway somewhere from that lb-mgmt network and save yourself the
>>> complication of adding an interface to your controllers. If you choose
>>> to use a separate interface on your controllers, you'll need to make
>>> sure this patch is in your kolla-ansible install or cherry pick it.
>>>
>>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>>>
>>> I don't think that's been backported at all, so unless you're running
>>> off master you'll need to go get it.
>>>
>>> From here on out, the regular Octavia instructions should serve you.
>>> Create a flavor, create a security group, and capture their UUIDs
>>> along with the UUID of the provider network you made. Override them in
>>> globals.yml with:
>>>
>>> octavia_amp_boot_network_list:
>>> octavia_amp_secgroup_list:
>>> octavia_amp_flavor_id:
>>>
>>> This is all from my scattered notes and bad memory. Hopefully it makes
>>> sense. Corrections welcome.
>>>
>>> -Erik
>>>
>>>>> -Erik
>>>>>>
>>>>>> All the best,
>>>>>> Florian
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch

From gael.therond at gmail.com  Wed Oct 24 12:06:06 2018
From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=)
Date: Wed, 24 Oct 2018 14:06:06 +0200
Subject: [Openstack-operators] [OCTAVIA][QUEENS][KOLLA] - Amphora to
 Health-manager invalid UDP heartbeat.
In-Reply-To: References: Message-ID:

Hi Michael,

Thanks a lot for all those details about the transitions between the
different states. Indeed, as you said, my LB passed from pending_update
to active, but it still showed an offline status this morning, as the HM
was still dropping the UDP heartbeat packets it received. When I was
talking about the HealthManager reaching the amphora on port 9443, I of
course didn't mean that it uses the heartbeat key.
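
For anyone else chasing this, the first thing I compared was the
heartbeat_key on both ends. The paths below are from my Kolla setup, so
treat them as an assumption for other deployments:

# grep heartbeat_key /etc/kolla/octavia-health-manager/octavia.conf    <- on a controller
# grep heartbeat_key /etc/octavia/amphora-agent.conf                   <- inside an amphora

Both must return the exact same [health_manager] heartbeat_key value,
otherwise the HM computes a different HMAC and drops every packet. In my
case the keys matched, which is what pointed me at the version mismatch
below.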
I just had a look at my Amphora and Octavia CP (Control Plane) versions;
they seem a little bit out of sync, as my amphora agent is:

*%prog 3.0.0.0b4.dev6*

while my octavia CP services are:

*%prog 2.0.1*

I've just updated to stable/rocky this morning and so jumped to:

*%prog 3.0.1*

I'll check if I still encounter this issue, but for now the problem seems
to have vanished, as I see the following messages:

*2018-10-24 11:58:54.620 24 DEBUG futurist.periodics [-] Submitting
periodic callback 'octavia.cmd.health_manager.periodic_health_check'
_process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:639*

*2018-10-24 11:58:57.620 24 DEBUG futurist.periodics [-] Submitting
periodic callback 'octavia.cmd.health_manager.periodic_health_check'
_process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:639*

*2018-10-24 11:59:00.620 24 DEBUG futurist.periodics [-] Submitting
periodic callback 'octavia.cmd.health_manager.periodic_health_check'
_process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:639*

*2018-10-24 11:59:03.620 24 DEBUG futurist.periodics [-] Submitting
periodic callback 'octavia.cmd.health_manager.periodic_health_check'
_process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:639*

*2018-10-24 11:59:04.557 23 DEBUG
octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from
('172.27.201.105', 48342) dorecv
/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:187*

*2018-10-24 11:59:04.619 45 DEBUG
octavia.controller.healthmanager.health_drivers.update_db [-] Health
Update finished in: 0.0600640773773 seconds update_health
/usr/lib/python2.7/site-packages/octavia/controller/healthmanager/health_drivers/update_db.py:93*

I'll update you as my investigation continues, but so far the issue seems
to be resolved. I'll tweak the timeouts a bit, as my LB takes a looooot of
time to create Listeners/Pools and come to an online status.

Thanks a lot!

Le mar. 23 oct. 2018 à 19:09, Michael Johnson a écrit :

> Are the controller and the amphora using the same version of Octavia?
>
> We had a python3 issue where we had to change the HMAC digest used. If
> your controller is running an older version of Octavia than your
> amphora images, it may not have the compatibility code to support the
> new format. The compatibility code is here:
>
> https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/health_daemon/status_message.py#L56
>
> There is also a release note about the issue here:
> https://docs.openstack.org/releasenotes/octavia/rocky.html#upgrade-notes
>
> If that is not the issue, I would double check the heartbeat_key in
> the health manager configuration files and inside one of the amphora.
>
> Note that this key is only used for health heartbeats and stats, it
> is not used for the controller to amphora communication on port 9443.
>
> Also, load balancers cannot get "stuck" in PENDING_* states unless
> someone has killed the controller process that was actively working on
> that load balancer. By killed I mean a non-graceful shutdown of the
> process that was in the middle of working on the load balancer.
> Otherwise all code paths lead back to ACTIVE or ERROR status after it
> finishes the work or gives up retrying the requested action. Check
> your controller logs to make sure this load balancer is not still
> being worked on by one of the controllers.
The default retry timeouts > (some are up to 25 minutes) are very long (it will keep trying to > accomplish the request) to accommodate very slow (virtual box) hosts > and the test gates. You will want to tune those down for a production > deployment. > > Michael > > On Tue, Oct 23, 2018 at 7:09 AM Gaël THEROND > wrote: > > > > Hi guys, > > > > I'm finishing to work on my POC for Octavia and after solving few issues > with my configuration I'm close to get a properly working setup. > > However, I'm facing a small but yet annoying bug with the health-manager > receiving amphora heartbeat UDP packet which it consider as not correct and > so drop it. > > > > Here are the messages that can be found in logs: > > > > 2018-10-23 13:53:21.844 25 WARNING > octavia.amphorae.backends.health_daemon.status_message [-] calculated hmac: > faf73e41a0f843b826ee581c3995b7f7e56b5e5a294fca0b84eda426766f8415 not equal > to msg hmac: > 6137613337316432636365393832376431343337306537353066626130653261 dropping > packet > > > > Which come from this part of the HM Code: > > > > > https://docs.openstack.org/octavia/pike/_modules/octavia/amphorae/backends/health_daemon/status_message.html#get_payload > > > > The annoying thing is that I don't get why the UDP packet is considered > as stale and how can I try to reproduce the payload which is send to the > HealthManager. > > I'm willing to write a simple PY program to simulate the heartbeat > payload but I don't now what's exactly the message and I think I miss some > informations. > > > > Both HealthManager and the Amphora do use the same heartbeat_key and > both can contact on the network as the initial Health-manager to Amphora > 9443 connection is validated. > > > > As an effect to this situation, my loadbalancer is stuck in > PENDING_UPDATE mode. > > > > Do you have any idea on how can I handle such thing or if it's something > already seen out there for anyone else? > > > > Kind regards, > > G. > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Oct 24 12:08:09 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 24 Oct 2018 08:08:09 -0400 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> Message-ID: On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann < florian.engelmann at everyware.ch> wrote: > Ohoh - thank you for your empathy :) > And those great details about how to setup this mgmt network. > I will try to do so this afternoon but solving that routing "puzzle" > (virtual network to control nodes) I will need our network guys to help > me out... > > But I will need to tell all Amphorae a static route to the gateway that > is routing to the control nodes? > Just set the default gateway when you create the neutron subnet. No need for excess static routes. The route on the other connection won't interfere with it as it lives in a namespace. > > Am 10/23/18 um 6:57 PM schrieb Erik McCormick: > > So in your other email you said asked if there was a guide for > > deploying it with Kolla ansible... > > > > Oh boy. No there's not. 
I don't know if you've seen my recent mails on > > Octavia, but I am going through this deployment process with > > kolla-ansible right now and it is lacking in a few areas. > > > > If you plan to use different CA certificates for client and server in > > Octavia, you'll need to add that into the playbook. Presently it only > > copies over ca_01.pem, cacert.key, and client.pem and uses them for > > everything. I was completely unable to make it work with only one CA > > as I got some SSL errors. It passes gate though, so I aasume it must > > work? I dunno. > > > > Networking comments and a really messy kolla-ansible / octavia how-to > below... > > > > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann > > wrote: > >> > >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: > >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann > >>> wrote: > >>>> > >>>> Hi, > >>>> > >>>> We did test Octavia with Pike (DVR deployment) and everything was > >>>> working right our of the box. We changed our underlay network to a > >>>> Layer3 spine-leaf network now and did not deploy DVR as we don't > wanted > >>>> to have that much cables in a rack. > >>>> > >>>> Octavia is not working right now as the lb-mgmt-net does not exist on > >>>> the compute nodes nor does a br-ex. > >>>> > >>>> The control nodes running > >>>> > >>>> octavia_worker > >>>> octavia_housekeeping > >>>> octavia_health_manager > >>>> octavia_api > >>>> > Amphorae-VMs, z.b. > > lb-mgmt-net 172.16.0.0/16 default GW > >>>> and as far as I understood octavia_worker, octavia_housekeeping and > >>>> octavia_health_manager have to talk to the amphora instances. But the > >>>> control nodes are spread over three different leafs. So each control > >>>> node in a different L2 domain. > >>>> > >>>> So the question is how to deploy a lb-mgmt-net network in our setup? > >>>> > >>>> - Compute nodes have no "stretched" L2 domain > >>>> - Control nodes, compute nodes and network nodes are in L3 networks > like > >>>> api, storage, ... > >>>> - Only network nodes are connected to a L2 domain (with a separated > NIC) > >>>> providing the "public" network > >>>> > >>> You'll need to add a new bridge to your compute nodes and create a > >>> provider network associated with that bridge. In my setup this is > >>> simply a flat network tied to a tagged interface. In your case it > >>> probably makes more sense to make a new VNI and create a vxlan > >>> provider network. The routing in your switches should handle the rest. > >> > >> Ok that's what I try right now. But I don't get how to setup something > >> like a VxLAN provider Network. I thought only vlan and flat is supported > >> as provider network? I guess it is not possible to use the tunnel > >> interface that is used for tenant networks? > >> So I have to create a separated VxLAN on the control and compute nodes > like: > >> > >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1 > >> dev vlan3535 ttl 5 > >> # ip addr add 172.16.1.11/20 dev vxoctavia > >> # ip link set vxoctavia up > >> > >> and use it like a flat provider network, true? > >> > > This is a fine way of doing things, but it's only half the battle. > > You'll need to add a bridge on the compute nodes and bind it to that > > new interface. Something like this if you're using openvswitch: > > > > docker exec openvswitch_db > > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia > > > > Also you'll want to remove the IP address from that interface as it's > > going to be a bridge. 
Think of it like your public (br-ex) interface > > on your network nodes. > > > > From there you'll need to update the bridge mappings via kolla > > overrides. This would usually be in /etc/kolla/config/neutron. Create > > a subdirectory for your compute inventory group and create an > > ml2_conf.ini there. So you'd end up with something like: > > > > [root at kolla-deploy ~]# cat > /etc/kolla/config/neutron/compute/ml2_conf.ini > > [ml2_type_flat] > > flat_networks = mgmt-net > > > > [ovs] > > bridge_mappings = mgmt-net:br-mgmt > > > > run kolla-ansible --tags neutron reconfigure to push out the new > > configs. Note that there is a bug where the neutron containers may not > > restart after the change, so you'll probably need to do a 'docker > > container restart neutron_openvswitch_agent' on each compute node. > > > > At this point, you'll need to create the provider network in the admin > > project like: > > > > openstack network create --provider-network-type flat > > --provider-physical-network mgmt-net lb-mgmt-net > > > > And then create a normal subnet attached to this network with some > > largeish address scope. I wouldn't use 172.16.0.0/16 because docker > > uses that by default. I'm not sure if it matters since the network > > traffic will be isolated on a bridge, but it makes me paranoid so I > > avoided it. > > > > For your controllers, I think you can just let everything function off > > your api interface since you're routing in your spines. Set up a > > gateway somewhere from that lb-mgmt network and save yourself the > > complication of adding an interface to your controllers. If you choose > > to use a separate interface on your controllers, you'll need to make > > sure this patch is in your kolla-ansible install or cherry pick it. > > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > > > I don't think that's been backported at all, so unless you're running > > off master you'll need to go get it. > > > > From here on out, the regular Octavia instruction should serve you. > > Create a flavor, Create a security group, and capture their UUIDs > > along with the UUID of the provider network you made. Override them in > > globals.yml with: > > > > octavia_amp_boot_network_list: > > octavia_amp_secgroup_list: > > octavia_amp_flavor_id: > > > > This is all from my scattered notes and bad memory. Hopefully it makes > > sense. Corrections welcome. > > > > -Erik > > > > > > > >> > >> > >>> > >>> -Erik > >>>> > >>>> All the best, > >>>> Florian > >>>> _______________________________________________ > >>>> OpenStack-operators mailing list > >>>> OpenStack-operators at lists.openstack.org > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonmills at gmail.com Wed Oct 24 14:46:42 2018 From: jonmills at gmail.com (Jonathan Mills) Date: Wed, 24 Oct 2018 10:46:42 -0400 Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants In-Reply-To: References: Message-ID: Iain et al, Appreciate your feedback. I work with Michael D. 
Moore on the same cluster, and I'm looking at the is_admin_project thing
you pointed out. What I found was that is_admin_project appeared to be
True for every single keystone request I found, regardless of user. There
were no explicit settings in keystone.conf relating to admin_project_name
or admin_project_domain_name. In the debug logs, Keystone was setting
those values to 'None'. I went ahead and added the following into
keystone.conf:

[resource]
admin_project_name = admin
admin_project_domain_name = Default

Subsequently, I think we are now seeing a different behavior with regard
to 'is_admin_project' in keystone requests. For example, here it is with
the admin user of the Default domain:

[root at vm013 ~]# openstack --debug image list 2>&1|grep 'is_admin_project'
{"token": {"is_domain": false, "methods": ["password"], "roles": [{"id":
"122576ec3bee490aaec8ff664a9446b4", "name": "admin"}], "is_admin_project":
true, "project": {"domain": {"id": "default", "name": "Default"}, "id":
"ac5b283406ff429291b4b4e958adca3f", "name": "admin"},

And here it is again as the non-admin user 'jonathan' in the Default domain:

[root at vm013 ~]# . keystonerc_jonathan
[root at vm013 ~]# openstack --debug image list 2>&1|grep 'is_admin_project'
{"token": {"is_domain": false, "methods": ["password"], "roles": [{"id":
"edc711368e72409ba25c6342ae9c0f80", "name": "user"}], "is_admin_project":
false, "project": {"domain": {"id": "d473b9495e13484ab391d6b5799ab0e2",
"name": "ndc"}, "id": "b472baecebb24f2f95c7b0c97b34e5c4", "name": "ozoneaq"},

Okay, so that looks good, I guess. It is different from before, where
is_admin_project was True for both cases. However, this does not seem to
have fixed the problem in any way, so I'm thinking that the
is_admin_project part might be a red herring. Non-admin user jonathan @
Default can still see all glance images, even ones marked private and
owned by other tenants.

Not knowing where else to go with this, I have opened a bug against Glance:

https://bugs.launchpad.net/glance/+bug/1799588

Jonathan

On Tue, Oct 23, 2018 at 7:46 PM iain MacDonnell wrote:

> It (still) seems like there's something funky about admin/non-admin in
> your case.
>
> You could try "openstack --debug token issue" (in the admin and
> non-admin cases), and examine the token dict that gets output. Look for
> the "roles" list and "is_admin_project".
>
>     ~iain
>
>
> On 10/23/2018 03:21 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> INTEGRA, INC.] wrote:
> > We have submitted a bug for this
> >
> > https://bugs.launchpad.net/glance/+bug/1799588
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> >
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> >
> > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> > Date: Saturday, October 20, 2018 at 7:22 PM
> > To: Logan Hicks ,
> > "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 96,
> > Issue 7
> >
> > The images exist and are bootable. I'm going to trace through the actual
> > code for glance API. Any suggestions on where the show/hide logic is
> > when it filters responses? I'm new to digging through OpenStack code.
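> >
> > For a starting point, I figured I'd just grep the installed glance tree
> > for the visibility check (the path assumes a python2.7 RPM install, and
> > the function name is from a quick look at the code, so treat both as
> > assumptions):
> >
> > # grep -rn "def is_image_visible" /usr/lib/python2.7/site-packages/glance/
> >
> > If I'm reading the layering right, the owner/visibility filtering
> > happens down in glance's DB API layer, so that seems like the place to
> > start reading.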
> >
> > ------------------------------------------------------------------------
> >
> > From: Logan Hicks [logan.hicks at live.com]
> > Sent: Friday, October 19, 2018 8:00 PM
> > To: openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 96,
> > Issue 7
> >
> > Re: Glance Image Visibility Issue? - Non admin users can see
> > private images from other tenants (Chris Apsey)
> >
> > I noticed that the image says queued. If I'm not mistaken, an image
> > can't have permissions applied until after the image is created, which
> > might explain the issue he's seeing.
> >
> > The object doesn't exist until it's made by OpenStack.
> >
> > I'd check to see if something is holding up images being made. I'd
> > start with glance.
> >
> > Respectfully,
> >
> > Logan Hicks
> >
> > -------- Original message --------
> > From: openstack-operators-request at lists.openstack.org
> > Date: 10/19/18 7:49 PM (GMT-05:00)
> > To: openstack-operators at lists.openstack.org
> > Subject: OpenStack-operators Digest, Vol 96, Issue 7
> >
> > Send OpenStack-operators mailing list submissions to
> > openstack-operators at lists.openstack.org
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > or, via email, send a message with subject or body 'help' to
> > openstack-operators-request at lists.openstack.org
> >
> > You can reach the person managing the list at
> > openstack-operators-owner at lists.openstack.org
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of OpenStack-operators digest..."
> >
> >
> > Today's Topics:
> >
> >    1. [nova] Removing the CachingScheduler (Matt Riedemann)
> >    2. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants
> >       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
> >    3. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants (Chris Apsey)
> >    4. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants (iain MacDonnell)
> >    5. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants
> >       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
> >    6. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants (iain MacDonnell)
> >    7. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants (Chris Apsey)
> >    8. osops-tools-monitoring Dependency problems (Tomáš Vondra)
> >    9. [heat][cinder] How to create stack snapshot including volumes
> >       (Christian Zunker)
> >   10. Fleio - OpenStack billing - ver. 1.1 released (Adrian Andreias)
> >   11. Re: [Openstack-sigs] [all] Naming the T release of OpenStack
> >       (Tony Breeds)
> >   12. Re: Glance Image Visibility Issue? - Non admin users can see
> >       private images from other tenants
> >       (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
> >   13. Re: Glance Image Visibility Issue?
> >   14. Re: Fleio - OpenStack billing - ver. 1.1 released (Jay Pipes)
> >   15. Re: Fleio - OpenStack billing - ver. 1.1 released (Mohammed Naser)
> >   16. [Octavia] SSL errors polling amphorae and missing tenant
> >       network interface (Erik McCormick)
> >   17. Re: [Octavia] SSL errors polling amphorae and missing tenant
> >       network interface (Gaël THEROND)
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 18 Oct 2018 17:07:00 -0500
> > From: Matt Riedemann
> > To: "openstack-operators at lists.openstack.org"
> > Subject: [Openstack-operators] [nova] Removing the CachingScheduler
> > Message-ID:
> > Content-Type: text/plain; charset=utf-8; format=flowed
> >
> > It's been deprecated since Pike, and the time has come to remove it [1].
> >
> > mgagne has been the most vocal CachingScheduler operator I know, and he
> > has tested out the "nova-manage placement heal_allocations" CLI, added
> > in Rocky, and said it will work for migrating his deployment from the
> > CachingScheduler to the FilterScheduler + Placement.
> >
> > If you are using the CachingScheduler and have a problem with its
> > removal, now is the time to speak up or forever hold your peace.
> >
> > [1] https://review.openstack.org/#/c/611723/1
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 18 Oct 2018 22:11:40 +0000
> > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> > To: iain MacDonnell, "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID:
> > Content-Type: text/plain; charset="utf-8"
> >
> > I have replicated this unexpected behavior in a Pike test environment,
> > in addition to our Queens environment.
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> >
> > On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS
> > INTEGRA, INC.]" wrote:
> >
> >     Yes. I verified it by creating a non-admin user in a different
> >     tenant. I created a new image, set to private, with the project
> >     defined as our admin tenant.
> >
> >     In the database I can see that the image is 'private' and the owner
> >     is the ID of the admin tenant.
> >
> >     Mike Moore, M.S.S.E.
> >
> >     Systems Engineer, Goddard Private Cloud
> >     Michael.D.Moore at nasa.gov
> >
> >     Hydrogen fusion brightens my day.
> >
> >     On 10/18/18, 1:07 AM, "iain MacDonnell" wrote:
> >
> >         On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
> >         INTEGRA, INC.] wrote:
> >         > I'm seeing unexpected behavior in our Queens environment
> >         > related to Glance image visibility. Specifically users who,
> >         > based on my understanding of the visibility and ownership
> >         > fields, should NOT be able to see or view the image.
> >         >
> >         > If I create a new image with openstack image create and
> >         > specify --project and --private, a non-admin user in a
> >         > different tenant can see and boot that image.
> >         >
> >         > That seems to be the opposite of what should happen. Any ideas?
> >
> >         Yep, something's not right there.
> >
> >         Are you sure that the user that can see the image doesn't have
> >         the admin role (for the project in its keystone token)?
> >
> >         Did you verify that the image's owner is what you intended, and
> >         that the visibility really is "private"?
> >
> >         ~iain
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Thu, 18 Oct 2018 18:23:35 -0400
> > From: Chris Apsey
> > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]",
> >     iain MacDonnell, openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID: <1668946da70.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net>
> > Content-Type: text/plain; format=flowed; charset="UTF-8"
> >
> > Do you have a liberal/custom policy.json that perhaps is causing
> > unexpected behavior? Can't seem to reproduce this.
> > ------------------------------
> >
> > Message: 4
> > Date: Thu, 18 Oct 2018 15:25:22 -0700
> > From: iain MacDonnell
> > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]",
> >     "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID: <11e3f7a6-875e-4b6c-259a-147188a860e1 at oracle.com>
> > Content-Type: text/plain; charset=utf-8; format=flowed
> >
> > I suspect that your non-admin user is not really non-admin. How did you
> > create it?
> >
> > What do you have for "context_is_admin" in glance's policy.json?
> >
> > ~iain
> > ------------------------------
> >
> > Message: 5
> > Date: Thu, 18 Oct 2018 22:32:42 +0000
> > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> > To: iain MacDonnell, "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID: <44085CC4-899C-49B2-9934-0800F6650B0B at nasa.gov>
> > Content-Type: text/plain; charset="utf-8"
> >
> > openstack user create --domain default --password xxxxxxxx \
> >     --project-domain ndc --project test mike
> >
> > openstack role add --user mike --user-domain default --project test user
> >
> > My admin account is in the NDC domain, with a different username.
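> >
> > (Aside: to double-check that this user really only has the 'user' role,
> > I've been listing the effective assignments with a quick
> > python-keystoneclient sketch like the one below - untested as pasted,
> > and the auth values are placeholders:)
> >
> >     from keystoneauth1.identity import v3
> >     from keystoneauth1 import session
> >     from keystoneclient.v3 import client
> >
> >     auth = v3.Password(auth_url='http://keystone:5000/v3',  # placeholder
> >                        username='admin', password='xxx',
> >                        project_name='admin',
> >                        user_domain_name='NDC',
> >                        project_domain_name='Default')
> >     ks = client.Client(session=session.Session(auth=auth))
> >
> >     # List every role assignment the test user holds, with names.
> >     user = ks.users.find(name='mike', domain='default')
> >     for a in ks.role_assignments.list(user=user, include_names=True):
> >         print(a.role, getattr(a, 'scope', None))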
> >
> > /etc/glance/policy.json:
> >
> >     {
> >
> >     "context_is_admin": "role:admin",
> >     "default": "role:admin",
> >
> > I'm not terribly familiar with the policies, but I feel like that
> > default line is making everyone an admin by default?
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> > ------------------------------
> >
> > Message: 6
> > Date: Thu, 18 Oct 2018 15:48:27 -0700
> > From: iain MacDonnell
> > To: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]",
> >     "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID:
> > Content-Type: text/plain; charset=utf-8; format=flowed
> >
> > That all looks fine.
> >
> > I believe that the "default" policy applies in place of any that's not
> > explicitly specified - i.e. "if there's no matching policy below, you
> > need to have the admin role to be able to do it". I do have that line in
> > my policy.json, and I cannot reproduce your problem (see below).
> >
> > I'm not using domains (other than "default"). I wonder if that's a factor...
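> >
> > A tiny oslo.policy sketch of what I mean by the fallback (written from
> > memory and untested, so treat the exact API details as approximate):
> >
> >     from oslo_config import cfg
> >     from oslo_policy import policy
> >
> >     enforcer = policy.Enforcer(cfg.CONF)
> >     # Same two lines as your policy.json; 'default' is the fallback rule.
> >     enforcer.set_rules(policy.Rules.from_dict(
> >         {'context_is_admin': 'role:admin',
> >          'default': 'role:admin'}, 'default'))
> >
> >     # 'get_images' isn't defined above, so the 'default' rule applies:
> >     print(enforcer.enforce('get_images', {}, {'roles': ['user']}))   # False
> >     print(enforcer.enforce('get_images', {}, {'roles': ['admin']}))  # True
> >
> > i.e. with "default": "role:admin", anything *not* listed in policy.json
> > requires the admin role - it should not make everyone an admin.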
> >
> > ~iain
> >
> > $ openstack user create --password foo user1
> > +---------------------+----------------------------------+
> > | Field               | Value                            |
> > +---------------------+----------------------------------+
> > | domain_id           | default                          |
> > | enabled             | True                             |
> > | id                  | d18c0031ec56430499a2d690cb1f125c |
> > | name                | user1                            |
> > | options             | {}                               |
> > | password_expires_at | None                             |
> > +---------------------+----------------------------------+
> > $ openstack user create --password foo user2
> > +---------------------+----------------------------------+
> > | Field               | Value                            |
> > +---------------------+----------------------------------+
> > | domain_id           | default                          |
> > | enabled             | True                             |
> > | id                  | be9f1061a5104abd834eabe98dff055d |
> > | name                | user2                            |
> > | options             | {}                               |
> > | password_expires_at | None                             |
> > +---------------------+----------------------------------+
> > $ openstack project create project1
> > +-------------+----------------------------------+
> > | Field       | Value                            |
> > +-------------+----------------------------------+
> > | description |                                  |
> > | domain_id   | default                          |
> > | enabled     | True                             |
> > | id          | 826876d6d3724018bae6253c7f540cb3 |
> > | is_domain   | False                            |
> > | name        | project1                         |
> > | parent_id   | default                          |
> > | tags        | []                               |
> > +-------------+----------------------------------+
> > $ openstack project create project2
> > +-------------+----------------------------------+
> > | Field       | Value                            |
> > +-------------+----------------------------------+
> > | description |                                  |
> > | domain_id   | default                          |
> > | enabled     | True                             |
> > | id          | b446b93ac6e24d538c1943acbdd13cb2 |
> > | is_domain   | False                            |
> > | name        | project2                         |
> > | parent_id   | default                          |
> > | tags        | []                               |
> > +-------------+----------------------------------+
> > $ openstack role add --user user1 --project project1 _member_
> > $ openstack role add --user user2 --project project2 _member_
> > $ export OS_PASSWORD=foo
> > $ export OS_USERNAME=user1
> > $ export OS_PROJECT_NAME=project1
> > $ openstack image list
> > +--------------------------------------+--------+--------+
> > | ID                                   | Name   | Status |
> > +--------------------------------------+--------+--------+
> > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> > +--------------------------------------+--------+--------+
> > $ openstack image create --private image1
> > +------------------+------------------------------------------------------+
> > | Field            | Value                                                |
> > +------------------+------------------------------------------------------+
> > | checksum         | None                                                 |
> > | container_format | bare                                                 |
> > | created_at       | 2018-10-18T22:17:41Z                                 |
> > | disk_format      | raw                                                  |
> > | file             | /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file |
> > | id               | 6a0c1928-b79c-4dbf-a9c9-305b599056e4                 |
> > | min_disk         | 0                                                    |
> > | min_ram          | 0                                                    |
> > | name             | image1                                               |
> > | owner            | 826876d6d3724018bae6253c7f540cb3                     |
> > | properties       | locations='[]', os_hash_algo='None',                 |
> > |                  | os_hash_value='None', os_hidden='False'              |
> > | protected        | False                                                |
> > | schema           | /v2/schemas/image                                    |
> > | size             | None                                                 |
> > | status           | queued                                               |
> > | tags             |                                                      |
> > | updated_at       | 2018-10-18T22:17:41Z                                 |
> > | virtual_size     | None                                                 |
> > | visibility       | private                                              |
> > +------------------+------------------------------------------------------+
> > $ openstack image list
> > +--------------------------------------+--------+--------+
> > | ID                                   | Name   | Status |
> > +--------------------------------------+--------+--------+
> > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
> > +--------------------------------------+--------+--------+
> > $ export OS_USERNAME=user2
> > $ export OS_PROJECT_NAME=project2
> > $ openstack image list
> > +--------------------------------------+--------+--------+
> > | ID                                   | Name   | Status |
> > +--------------------------------------+--------+--------+
> > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> > +--------------------------------------+--------+--------+
> > $ export OS_USERNAME=admin
> > $ export OS_PROJECT_NAME=admin
> > $ export OS_PASSWORD=xxx
> > $ openstack image set --public 6a0c1928-b79c-4dbf-a9c9-305b599056e4
> > $ export OS_USERNAME=user2
> > $ export OS_PROJECT_NAME=project2
> > $ export OS_PASSWORD=foo
> > $ openstack image list
> > +--------------------------------------+--------+--------+
> > | ID                                   | Name   | Status |
> > +--------------------------------------+--------+--------+
> > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
> > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued |
> > +--------------------------------------+--------+--------+
> > $
> > ------------------------------
> >
> > Message: 7
> > Date: Thu, 18 Oct 2018 19:23:42 -0400
> > From: Chris Apsey
> > To: iain MacDonnell, "Moore, Michael Dane (GSFC-720.0)[BUSINESS
> >     INTEGRA, INC.]", openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID: <166897de830.278c.5f0d7f2baa7831a2bbe6450f254d9a24 at bitskrieg.net>
> > Content-Type: text/plain; format=flowed; charset="UTF-8"
> >
> > We are using multiple keystone domains - still can't reproduce this.
> >
> > Do you happen to have a customized keystone policy.json?
> >
> > Worst case, I would launch a devstack of your targeted release. If you
> > can't reproduce the issue there, you would at least know it's caused by
> > a nonstandard config rather than a bug (or at least not a bug that's
> > present when using a default config).
> >
> > ------------------------------
> >
> > Message: 8
> > Date: Fri, 19 Oct 2018 10:58:30 +0200
> > From: Tomáš Vondra
> > To:
> > Subject: [Openstack-operators] osops-tools-monitoring Dependency problems
> > Message-ID: <049e01d46789$e8bf5220$ba3df660$@homeatcloud.cz>
> > Content-Type: text/plain; charset="iso-8859-2"
> >
> > Hi!
> > I'm a long-time user of monitoring-for-openstack, also known as oschecks.
> > Concretely, I used a version from 2015 with OpenStack python client
> > libraries from Kilo. Now I have upgraded them to Mitaka and it got broken.
> > Even the latest oschecks don't work. I didn't quite expect that, given
> > that there are several commits from this year, e.g. by Nagasai Vinaykumar
> > Kapalavai and paramite.
> > Can one of them or some other user step up and say which version of the
> > OpenStack clients oschecks works with? Ideally, write it down in
> > requirements.txt so that it is reproducible. Also, some documentation of
> > the minimal set of parameters would come in handy.
> > Thanks a lot, Tomas from Homeatcloud
> >
> > The error messages are as absurd as:
> >
> >     oschecks-check_glance_api --os_auth_url='http://10.1.101.30:5000/v2.0' \
> >         --os_username=monitoring --os_password=XXX --os_tenant_name=monitoring
> >
> >     CRITICAL: Traceback (most recent call last):
> >       File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 121, in safe_run
> >         method()
> >       File "/usr/lib/python2.7/dist-packages/oschecks/glance.py", line 29, in _check_glance_api
> >         glance = utils.Glance()
> >       File "/usr/lib/python2.7/dist-packages/oschecks/utils.py", line 177, in __init__
> >         self.glance.parser = self.glance.get_base_parser(sys.argv)
> >     TypeError: get_base_parser() takes exactly 1 argument (2 given)
> >
> > (I can see 4 parameters on the command line.)
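> >
> > (Reading the traceback, it looks like the installed client's
> > get_base_parser() no longer accepts an argv argument, while oschecks
> > still passes one. The shape of the failure, as a standalone sketch -
> > not the actual client code:)
> >
> >     import sys
> >
> >     class Shell(object):
> >         def get_base_parser(self):  # signature with no argv parameter
> >             return 'parser'
> >
> >     shell = Shell()
> >     shell.get_base_parser()          # works
> >     shell.get_base_parser(sys.argv)  # TypeError: get_base_parser()
> >                                      # takes exactly 1 argument (2 given)
> >
> > (So the fix is presumably to pin the client libraries to versions whose
> > get_base_parser() signature matches what oschecks expects.)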
> > ------------------------------
> >
> > Message: 9
> > Date: Fri, 19 Oct 2018 11:21:25 +0200
> > From: Christian Zunker
> > To: openstack-operators
> > Subject: [Openstack-operators] [heat][cinder] How to create stack
> >     snapshot including volumes
> > Message-ID:
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hi List,
> >
> > I'd like to take snapshots of heat stacks including the volumes.
> > From what I have found until now, this should be possible; you just have
> > to configure some parts of OpenStack.
> >
> > I enabled cinder-backup with the ceph backend. Backups from volumes are
> > working. I configured heat to include the option backups_enabled = True.
> >
> > When I use openstack stack snapshot create, I get a snapshot but no
> > backups of my volumes. I don't get any error messages in heat, and debug
> > logging didn't help either.
> >
> > The OpenStack version is Pike on Ubuntu, installed with
> > openstack-ansible. The heat version is 9.0.3, so this should also
> > include this bugfix:
> > https://bugs.launchpad.net/heat/+bug/1687006
> >
> > Is anybody using this feature? What am I missing?
> >
> > Best regards
> > Christian
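> >
> > PS: for completeness, this is roughly how I trigger the snapshot and
> > look for the volume backups from python (a sketch from memory - I
> > believe the manager calls are stacks.snapshot() and backups.list(), but
> > treat the exact names as unverified, and the auth values are
> > placeholders):
> >
> >     from keystoneauth1.identity import v3
> >     from keystoneauth1 import session
> >     from heatclient import client as heat_client
> >     from cinderclient import client as cinder_client
> >
> >     auth = v3.Password(auth_url='http://keystone:5000/v3',
> >                        username='admin', password='xxx',
> >                        project_name='admin',
> >                        user_domain_name='Default',
> >                        project_domain_name='Default')
> >     sess = session.Session(auth=auth)
> >
> >     heat = heat_client.Client('1', session=sess)
> >     cinder = cinder_client.Client('3', session=sess)
> >
> >     # Snapshot the stack; with backups_enabled = True I would expect
> >     # cinder backups of the stack's volumes to appear afterwards.
> >     print(heat.stacks.snapshot('my-stack'))   # hypothetical stack name/id
> >     print([b.id for b in cinder.backups.list()])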
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20181019/bb7dd81b/attachment-0001.html>
> >
> > ------------------------------
> >
> > Message: 10
> > Date: Fri, 19 Oct 2018 12:42:00 +0300
> > From: Adrian Andreias
> > To: openstack-operators at lists.openstack.org
> > Subject: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released
> > Message-ID:
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hello,
> >
> > We've just released Fleio version 1.1.
> >
> > Fleio is a billing solution and control panel for OpenStack public clouds
> > and traditional web hosters.
> >
> > Fleio automates the entire process for cloud users. New customers can use
> > Fleio to sign up for an account, pay invoices, add credit to their
> > account, as well as create and manage cloud resources such as virtual
> > machines, storage and networking.
> >
> > Full feature list: https://fleio.com#features
> >
> > You can see an online demo: https://fleio.com/demo
> >
> > And sign up for a free trial: https://fleio.com/signup
> >
> > Cheers!
> >
> > - Adrian Andreias
> > https://fleio.com
> >
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20181019/3031e47f/attachment-0001.html>
> >
> > ------------------------------
> >
> > Message: 11
> > Date: Fri, 19 Oct 2018 20:54:29 +1100
> > From: Tony Breeds
> > To: OpenStack Development, OpenStack SIGs, OpenStack Operators
> > Subject: Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T
> >     release of OpenStack
> > Message-ID: <20181019095428.GA9399 at thor.bakeyournoodle.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> > > Hello all,
> > >     As per [1], the nomination period for names for the T release has
> > > now closed (actually 3 days ago, sorry). The nominated names and any
> > > qualifying remarks can be seen at [2].
> > >
> > > Proposed Names
> > >  * Tarryall
> > >  * Teakettle
> > >  * Teller
> > >  * Telluride
> > >  * Thomas
> > >  * Thornton
> > >  * Tiger
> > >  * Tincup
> > >  * Timnath
> > >  * Timber
> > >  * Tiny Town
> > >  * Torreys
> > >  * Trail
> > >  * Trinidad
> > >  * Treasure
> > >  * Troublesome
> > >  * Trussville
> > >  * Turret
> > >  * Tyrone
> > >
> > > Proposed Names that do not meet the criteria
> > >  * Train
> >
> > I have re-worked my openstack/governance change [1] to ask the TC to
> > accept adding Train to the poll, as (partially) described in [2].
> >
> > I present the names above to the community and Foundation marketing team
> > for consideration. The list above does contain Train; clearly, if the TC
> > does not approve [1], Train will not be included in the poll when created.
> >
> > I apologise for any offence or slight caused by my previous email in
> > this thread. It was well intentioned albeit, with hindsight, poorly
> > thought through.
> >
> > Yours Tony.
> >
> > [1] https://review.openstack.org/#/c/611511/
> > [2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria
> >
> > -------------- next part --------------
> > A non-text attachment was scrubbed...
> > Name: signature.asc
> > Type: application/pgp-signature
> > Size: 488 bytes
> > Desc: not available
> > URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20181019/49c95d5d/attachment-0001.sig>
> >
> > ------------------------------
> >
> > Message: 12
> > Date: Fri, 19 Oct 2018 16:33:17 +0000
> > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> > To: Chris Apsey, iain MacDonnell,
> >     "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
> >     Non admin users can see private images from other tenants
> > Message-ID: <4704898B-D193-4540-B106-BF38ACAB68E2 at nasa.gov>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Our NDC domain is LDAP backed. Default is not.
> >
> > Our keystone policy.json file is empty: {}
> >
> > Mike Moore, M.S.S.E.
> >
> > Systems Engineer, Goddard Private Cloud
> > Michael.D.Moore at nasa.gov
> >
> > Hydrogen fusion brightens my day.
> > ------------------------------
> >
> > Message: 13
> > Date: Fri, 19 Oct 2018 16:54:12 +0000
> > From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
> > To: Chris Apsey, iain MacDonnell,
> >     "openstack-operators at lists.openstack.org"
> > Subject: Re: [Openstack-operators] Glance Image Visibility Issue? -
- > > Non admin users can see private images from other tenants > > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > > > > > For reference, here is our full glance policy.json > > > > > > { > > "context_is_admin": "role:admin", > > "default": "role:admin", > > > > "add_image": "", > > "delete_image": "", > > "get_image": "", > > "get_images": "", > > "modify_image": "", > > "publicize_image": "role:admin", > > "communitize_image": "", > > "copy_from": "", > > > > "download_image": "", > > "upload_image": "", > > > > "delete_image_location": "", > > "get_image_location": "", > > "set_image_location": "", > > > > "add_member": "", > > "delete_member": "", > > "get_member": "", > > "get_members": "", > > "modify_member": "", > > > > "manage_image_cache": "role:admin", > > > > "get_task": "", > > "get_tasks": "", > > "add_task": "", > > "modify_task": "", > > "tasks_api_access": "role:admin", > > > > "deactivate": "", > > "reactivate": "", > > > > "get_metadef_namespace": "", > > "get_metadef_namespaces":"", > > "modify_metadef_namespace":"", > > "add_metadef_namespace":"", > > > > "get_metadef_object":"", > > "get_metadef_objects":"", > > "modify_metadef_object":"", > > "add_metadef_object":"", > > > > "list_metadef_resource_types":"", > > "get_metadef_resource_type":"", > > "add_metadef_resource_type_association":"", > > > > "get_metadef_property":"", > > "get_metadef_properties":"", > > "modify_metadef_property":"", > > "add_metadef_property":"", > > > > "get_metadef_tag":"", > > "get_metadef_tags":"", > > "modify_metadef_tag":"", > > "add_metadef_tag":"", > > "add_metadef_tags":"" > > > > } > > > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/19/18, 12:39 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS > > INTEGRA, INC.]" wrote: > > > > Our NDC domain is LDAP backed. Default is not. > > > > Our keystone policy.json file is empty {} > > > > > > > > Mike Moore, M.S.S.E. > > > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > > > Hydrogen fusion brightens my day. > > > > > > On 10/18/18, 7:24 PM, "Chris Apsey" > wrote: > > > > We are using multiple keystone domains - still can't reproduce > > this. > > > > Do you happen to have a customized keystone policy.json? > > > > Worst case, I would launch a devstack of your targeted > > release. If you > > can't reproduce the issue there, you would at least know its > > caused by a > > nonstandard config rather than a bug (or at least not a bug > > that's present > > when using a default config) > > > > On October 18, 2018 18:50:12 iain MacDonnell > > > > wrote: > > > > > That all looks fine. > > > > > > I believe that the "default" policy applies in place of any > > that's not > > > explicitly specified - i.e. "if there's no matching policy > > below, you > > > need to have the admin role to be able to do it". I do have > > that line in > > > my policy.json, and I cannot reproduce your problem (see > below). > > > > > > I'm not using domains (other than "default"). I wonder if > > that's a factor... 
> > > > > > ~iain > > > > > > > > > $ openstack user create --password foo user1 > > > +---------------------+----------------------------------+ > > > | Field | Value | > > > +---------------------+----------------------------------+ > > > | domain_id | default | > > > | enabled | True | > > > | id | d18c0031ec56430499a2d690cb1f125c | > > > | name | user1 | > > > | options | {} | > > > | password_expires_at | None | > > > +---------------------+----------------------------------+ > > > $ openstack user create --password foo user2 > > > +---------------------+----------------------------------+ > > > | Field | Value | > > > +---------------------+----------------------------------+ > > > | domain_id | default | > > > | enabled | True | > > > | id | be9f1061a5104abd834eabe98dff055d | > > > | name | user2 | > > > | options | {} | > > > | password_expires_at | None | > > > +---------------------+----------------------------------+ > > > $ openstack project create project1 > > > +-------------+----------------------------------+ > > > | Field | Value | > > > +-------------+----------------------------------+ > > > | description | | > > > | domain_id | default | > > > | enabled | True | > > > | id | 826876d6d3724018bae6253c7f540cb3 | > > > | is_domain | False | > > > | name | project1 | > > > | parent_id | default | > > > | tags | [] | > > > +-------------+----------------------------------+ > > > $ openstack project create project2 > > > +-------------+----------------------------------+ > > > | Field | Value | > > > +-------------+----------------------------------+ > > > | description | | > > > | domain_id | default | > > > | enabled | True | > > > | id | b446b93ac6e24d538c1943acbdd13cb2 | > > > | is_domain | False | > > > | name | project2 | > > > | parent_id | default | > > > | tags | [] | > > > +-------------+----------------------------------+ > > > $ openstack role add --user user1 --project project1 _member_ > > > $ openstack role add --user user2 --project project2 _member_ > > > $ export OS_PASSWORD=foo > > > $ export OS_USERNAME=user1 > > > $ export OS_PROJECT_NAME=project1 > > > $ openstack image list > > > +--------------------------------------+--------+--------+ > > > | ID | Name | Status | > > > +--------------------------------------+--------+--------+ > > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > > +--------------------------------------+--------+--------+ > > > $ openstack image create --private image1 > > > > > > +------------------+------------------------------------------------------------------------------+ > > > | Field | Value > > > | > > > > > > +------------------+------------------------------------------------------------------------------+ > > > | checksum | None > > > | > > > | container_format | bare > > > | > > > | created_at | 2018-10-18T22:17:41Z > > > | > > > | disk_format | raw > > > | > > > | file | > > > /v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file > > > | > > > | id | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > > | > > > | min_disk | 0 > > > | > > > | min_ram | 0 > > > | > > > | name | image1 > > > | > > > | owner | 826876d6d3724018bae6253c7f540cb3 > > > | > > > | properties | locations='[]', os_hash_algo='None', > > > os_hash_value='None', os_hidden='False' | > > > | protected | False > > > | > > > | schema | /v2/schemas/image > > > | > > > | size | None > > > | > > > | status | queued > > > | > > > | tags | > > > | > > > | updated_at | 2018-10-18T22:17:41Z > > > | > > > | virtual_size | None > > > | > > > | visibility | 
private > > > | > > > > > > +------------------+------------------------------------------------------------------------------+ > > > $ openstack image list > > > +--------------------------------------+--------+--------+ > > > | ID | Name | Status | > > > +--------------------------------------+--------+--------+ > > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > > +--------------------------------------+--------+--------+ > > > $ export OS_USERNAME=user2 > > > $ export OS_PROJECT_NAME=project2 > > > $ openstack image list > > > +--------------------------------------+--------+--------+ > > > | ID | Name | Status | > > > +--------------------------------------+--------+--------+ > > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > > +--------------------------------------+--------+--------+ > > > $ export OS_USERNAME=admin > > > $ export OS_PROJECT_NAME=admin > > > $ export OS_PASSWORD=xxx > > > $ openstack image set --public > > 6a0c1928-b79c-4dbf-a9c9-305b599056e4 > > > $ export OS_USERNAME=user2 > > > $ export OS_PROJECT_NAME=project2 > > > $ export OS_PASSWORD=foo > > > $ openstack image list > > > +--------------------------------------+--------+--------+ > > > | ID | Name | Status | > > > +--------------------------------------+--------+--------+ > > > | ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active | > > > | 6a0c1928-b79c-4dbf-a9c9-305b599056e4 | image1 | queued | > > > +--------------------------------------+--------+--------+ > > > $ > > > > > > > > > On 10/18/2018 03:32 PM, Moore, Michael Dane > (GSFC-720.0)[BUSINESS > > > INTEGRA, INC.] wrote: > > >> openstack user create --domain default --password xxxxxxxx > > --project-domain > > >> ndc --project test mike > > >> > > >> > > >> openstack role add --user mike --user-domain default > > --project test user > > >> > > >> my admin account is in the NDC domain with a different > username. > > >> > > >> > > >> > > >> /etc/glance/policy.json > > >> { > > >> > > >> "context_is_admin": "role:admin", > > >> "default": "role:admin", > > >> > > >> > > >> > > >> > > >> I'm not terribly familiar with the policies but I feel like > > that default > > >> line is making everyone an admin by default? > > >> > > >> > > >> Mike Moore, M.S.S.E. > > >> > > >> Systems Engineer, Goddard Private Cloud > > >> Michael.D.Moore at nasa.gov > > >> > > >> Hydrogen fusion brightens my day. > > >> > > >> > > >> On 10/18/18, 6:25 PM, "iain MacDonnell" > > wrote: > > >> > > >> > > >> I suspect that your non-admin user is not really non-admin. > > How did you > > >> create it? > > >> > > >> What you have for "context_is_admin" in glance's policy.json > ? > > >> > > >> ~iain > > >> > > >> > > >> On 10/18/2018 03:11 PM, Moore, Michael Dane > > (GSFC-720.0)[BUSINESS > > >> INTEGRA, INC.] wrote: > > >>> I have replicated this unexpected behavior in a Pike test > > environment, in > > >>> addition to our Queens environment. > > >>> > > >>> > > >>> > > >>> Mike Moore, M.S.S.E. > > >>> > > >>> Systems Engineer, Goddard Private Cloud > > >>> Michael.D.Moore at nasa.gov > > >>> > > >>> Hydrogen fusion brightens my day. > > >>> > > >>> > > >>> On 10/18/18, 2:30 PM, "Moore, Michael Dane > > (GSFC-720.0)[BUSINESS INTEGRA, > > >>> INC.]" wrote: > > >>> > > >>> Yes. I verified it by creating a non-admin user in a > > different tenant. I > > >>> created a new image, set to private with the project > > defined as our admin > > >>> tenant. 
> > >>> > > >>> In the database I can see that the image is 'private' > > and the owner is the > > >>> ID of the admin tenant. > > >>> > > >>> Mike Moore, M.S.S.E. > > >>> > > >>> Systems Engineer, Goddard Private Cloud > > >>> Michael.D.Moore at nasa.gov > > >>> > > >>> Hydrogen fusion brightens my day. > > >>> > > >>> > > >>> On 10/18/18, 1:07 AM, "iain MacDonnell" > > wrote: > > >>> > > >>> > > >>> > > >>> On 10/17/2018 12:29 PM, Moore, Michael Dane > > (GSFC-720.0)[BUSINESS > > >>> INTEGRA, INC.] wrote: > > >>> > I’m seeing unexpected behavior in our Queens > > environment related to > > >>> > Glance image visibility. Specifically users who, > > based on my > > >>> > understanding of the visibility and ownership > > fields, should NOT be able > > >>> > to see or view the image. > > >>> > > > >>> > If I create a new image with openstack image > > create and specify –project > > >>> > and –private a non-admin user in a > > different tenant can see and > > >>> > boot that image. > > >>> > > > >>> > That seems to be the opposite of what should > > happen. Any ideas? > > >>> > > >>> Yep, something's not right there. > > >>> > > >>> Are you sure that the user that can see the image > > doesn't have the admin > > >>> role (for the project in its keystone token) ? > > >>> > > >>> Did you verify that the image's owner is what you > > intended, and that the > > >>> visibility really is "private" ? > > >>> > > >>> ~iain > > >>> > > >>> _______________________________________________ > > >>> OpenStack-operators mailing list > > >>> OpenStack-operators at lists.openstack.org > > >>> > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > >>> > > >>> > > >>> _______________________________________________ > > >>> OpenStack-operators mailing list > > >>> OpenStack-operators at lists.openstack.org > > >>> > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=B-M8uELxrmQ5uIYT792YA5rpb5NLAecRQPH_ITY1R5k&s=1KSr8HB8BJJB4-nGHyuZDcQUdssno-bBdbNqswMm6oE&e= > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > > > > > > > > > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > > > > > > > > > > 
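Two hedged checks that would help narrow this down. The commands come from
python-openstackclient and oslo.policy; the exact flag spellings are worth
verifying against --help:

    # List every role the user effectively ends up with, including
    # roles inherited through group membership:
    $ openstack role assignment list --user mike --user-domain default \
          --names --effective

    # Evaluate the policy.json above against a saved token response for
    # the test user (access.json is a hypothetical file name):
    $ oslopolicy-checker --policy /etc/glance/policy.json \
          --access access.json --rule get_image

If "admin" turns up in the first listing for the project the test token is
scoped to, "context_is_admin" matches and Glance treats the request as
admin, which would explain seeing other tenants' private images.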
------------------------------ > > > > Message: 14 > > Date: Fri, 19 Oct 2018 13:45:03 -0400 > > From: Jay Pipes > > To: openstack-operators at lists.openstack.org > > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. > > 1.1 released > > Message-ID: > > Content-Type: text/plain; charset=utf-8; format=flowed > > > > Please do not use these mailing lists to advertise > > closed-source/proprietary software solutions. > > > > Thank you, > > -jay > > > > On 10/19/2018 05:42 AM, Adrian Andreias wrote: > >> Hello, > >> > >> We've just released Fleio version 1.1. > >> > >> Fleio is a billing solution and control panel for OpenStack public > >> clouds and traditional web hosters. > >> > >> Fleio software automates the entire process for cloud users. New > >> customers can use Fleio to sign up for an account, pay invoices, add > >> credit to their account, as well as create and manage cloud resources > >> such as virtual machines, storage and networking. > >> > >> Full feature list: > >> https://fleio.com#features > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com-23features&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=BrOjwRrcQVfBauwf8lZ439skCFkW1CmcZ4NNdTkQDGg&e= > > > >> > >> You can see an online demo: > >> https://fleio.com/demo > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com_demo&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=3Zute5FDzopFoMvqplhIEh9_6wmKOczoeYx4F2Ulni0&e= > > > >> > >> And sign-up for a free trial: > >> https://fleio.com/signup > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com_signup&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=1z9sWcZjZ3HsDnbaK7jH0_WcAJ_ZNSP7fw6hORW00v0&e= > > > >> > >> > >> > >> Cheers! > >> > >> - Adrian Andreias > >> https://fleio.com > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=6dlGzWvUN7KbdNbPt3xeMM7tBqWDCXRb0hSyshGhYJM&e= > > > >> > >> > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > > > >> > > > > > > > > ------------------------------ > > > > Message: 15 > > Date: Fri, 19 Oct 2018 20:13:40 +0200 > > From: Mohammed Naser > > To: jaypipes at gmail.com > > Cc: openstack-operators > > Subject: Re: [Openstack-operators] Fleio - OpenStack billing - ver. > > 1.1 released > > Message-ID: > > > > > > Content-Type: text/plain; charset="UTF-8" > > > > On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes wrote: > >> > >> Please do not use these mailing lists to advertise > >> closed-source/proprietary software solutions. 
> > > > +1 > > > >> Thank you, > >> -jay > >> > >> On 10/19/2018 05:42 AM, Adrian Andreias wrote: > >> > Hello, > >> > > >> > We've just released Fleio version 1.1. > >> > > >> > Fleio is a billing solution and control panel for OpenStack public > >> > clouds and traditional web hosters. > >> > > >> > Fleio software automates the entire process for cloud users. New > >> > customers can use Fleio to sign up for an account, pay invoices, add > >> > credit to their account, as well as create and manage cloud resources > >> > such as virtual machines, storage and networking. > >> > > >> > Full feature list: > >> > https://fleio.com#features > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com-23features&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=BrOjwRrcQVfBauwf8lZ439skCFkW1CmcZ4NNdTkQDGg&e= > > > >> > > >> > You can see an online demo: > >> > https://fleio.com/demo > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com_demo&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=3Zute5FDzopFoMvqplhIEh9_6wmKOczoeYx4F2Ulni0&e= > > > >> > > >> > And sign-up for a free trial: > >> > https://fleio.com/signup > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com_signup&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=1z9sWcZjZ3HsDnbaK7jH0_WcAJ_ZNSP7fw6hORW00v0&e= > > > >> > > >> > > >> > > >> > Cheers! > >> > > >> > - Adrian Andreias > >> > https://fleio.com > > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__fleio.com&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=6dlGzWvUN7KbdNbPt3xeMM7tBqWDCXRb0hSyshGhYJM&e= > > > >> > > >> > > >> > > >> > _______________________________________________ > >> > OpenStack-operators mailing list > >> > OpenStack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > > > >> > > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=sAUSoIWeLJ2p07R9PICTtT_OkUTfjNKOngMa8nQunvM&e= > > > > > > > > > > -- > > Mohammed Naser — vexxhost > > ----------------------------------------------------- > > D. 514-316-8872 > > D. 800-910-1726 ext. 200 > > E. mnaser at vexxhost.com > > W. 
http://vexxhost.com > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__vexxhost.com&d=DwMGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=UMCq1q-ElsVP72_5lCFTGnKxGwn4zkNordf47XiWPYg&s=bq9EPen7RattOa34V0HaOLcBDca21nN47DlkgOKUYMM&e= > > > > > > > > > > ------------------------------ > > > > Message: 16 > > Date: Fri, 19 Oct 2018 14:39:29 -0400 > > From: Erik McCormick > > To: openstack-operators > > Subject: [Openstack-operators] [Octavia] SSL errors polling amphorae > > and missing tenant network interface > > Message-ID: > > > > > > Content-Type: text/plain; charset="UTF-8" > > > > I've been wrestling with getting Octavia up and running and have > > become stuck on two issues. I'm hoping someone has run into these > > before. My google foo has come up empty. > > > > Issue 1: > > When the Octavia controller tries to poll the amphora instance, it > > tries repeatedly and eventually fails. The error on the controller > > side is: > > > > 2018-10-19 14:17:39.181 26 ERROR > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > > retries (currently set to 300) exhausted. The amphora is unavailable. > > Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > > exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > > SSLError(SSLError("bad handshake: Error([('rsa routines', > > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > 'tls_process_server_certificate', 'certificate verify > > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > > port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > 'tls_process_server_certificate', 'certificate verify > > failed')],)",),)) > > > > On the amphora side I see: > > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > > ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > > failure (_ssl.c:1754) > > > > I've generated certificates both with the script in the Octavia git > > repo, and with the Openstack Ansible playbook. I can see that they are > > present in /etc/octavia/certs. > > > > I'm using the Kolla (Queens) containers for the control plane so I'm > > sure I've satisfied all the python library constraints. > > > > Issue 2: > > I"m not sure how it gets configured, but the tenant network interface > > (ens6) never comes up. I can spawn other instances on that network > > with no issue, and I can see that Neutron has the port attached to the > > instance. 
However, in the instance this is all I get:
> >
> > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> > valid_lft forever preferred_lft forever
> > inet6 ::1/128 scope host
> > valid_lft forever preferred_lft forever
> > 2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
> > link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
> > inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
> > valid_lft forever preferred_lft forever
> > inet6 fe80::f816:3eff:fe30:c460/64 scope link
> > valid_lft forever preferred_lft forever
> > 3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000
> > link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
> >
> > There's no evidence of the interface anywhere else including udev rules.
> >
> > Any help with either or both issues would be greatly appreciated.
> >
> > Cheers,
> > Erik
> >
> >
> > ------------------------------
> >
> > Message: 17
> > Date: Sat, 20 Oct 2018 01:47:42 +0200
> > From: Gaël THEROND
> > To: Erik McCormick
> > Cc: openstack-operators
> > Subject: Re: [Openstack-operators] [Octavia] SSL errors polling
> > amphorae and missing tenant network interface
> > Message-ID:
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hi Eric!
> >
> > Glad I'm not the only one having this issue with the SSL communication
> > between the amphorae and the control plane.
> >
> > I don't have a clear answer on the SSL issue yet, but I think your
> > second issue is not actually a problem: the interface is plugged inside
> > a network namespace, so you need to list the NICs in the namespace as
> > well, not just those in the default namespace.
> >
> > Run 'ip netns ls' to find the namespace.
> >
> > Hope this helps.
> >
> > On Fri, 19 Oct 2018 at 20:40, Erik McCormick wrote:
> >
> >> I've been wrestling with getting Octavia up and running and have
> >> become stuck on two issues. I'm hoping someone has run into these
> >> before. My google foo has come up empty.
> >>
> >> Issue 1:
> >> When the Octavia controller tries to poll the amphora instance, it
> >> tries repeatedly and eventually fails. The error on the controller
> >> side is:
> >>
> >> 2018-10-19 14:17:39.181 26 ERROR
> >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> >> retries (currently set to 300) exhausted. The amphora is unavailable.
> >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > >> SSLError(SSLError("bad handshake: Error([('rsa routines', > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >> 'tls_process_server_certificate', 'certificate verify > >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >> 'tls_process_server_certificate', 'certificate verify > >> failed')],)",),)) > >> > >> On the amphora side I see: > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > >> failure (_ssl.c:1754) > >> > >> I've generated certificates both with the script in the Octavia git > >> repo, and with the Openstack Ansible playbook. I can see that they are > >> present in /etc/octavia/certs. > >> > >> I'm using the Kolla (Queens) containers for the control plane so I'm > >> sure I've satisfied all the python library constraints. > >> > >> Issue 2: > >> I"m not sure how it gets configured, but the tenant network interface > >> (ens6) never comes up. I can spawn other instances on that network > >> with no issue, and I can see that Neutron has the port attached to the > >> instance. However, in the instance this is all I get: > >> > >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >> group default qlen 1 > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> inet 127.0.0.1/8 scope host lo > >> valid_lft forever preferred_lft forever > >> inet6 ::1/128 scope host > >> valid_lft forever preferred_lft forever > >> 2: ens3: mtu 9000 qdisc pfifo_fast > >> state UP group default qlen 1000 > >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > >> valid_lft forever preferred_lft forever > >> inet6 fe80::f816:3eff:fe30:c460/64 scope link > >> valid_lft forever preferred_lft forever > >> 3: ens6: mtu 1500 qdisc noop state DOWN group > >> default qlen 1000 > >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > >> > >> There's no evidence of the interface anywhere else including udev rules. > >> > >> Any help with either or both issues would be greatly appreciated. 
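The handshake failure can usually be reproduced outside of Octavia. A
hedged debugging sketch, assuming the /etc/octavia/certs layout mentioned
above (file names vary by deployment, and client.pem frequently bundles
the certificate and key in a single file):

    # From a controller, attempt the same mutually-authenticated TLS
    # handshake the worker performs against the amphora agent:
    $ openssl s_client -connect 10.7.0.112:9443 \
          -cert /etc/octavia/certs/client.pem \
          -key /etc/octavia/certs/client.pem \
          -CAfile /etc/octavia/certs/ca_01.pem

    # For the second issue: the plugged interfaces live inside a network
    # namespace on the amphora (the namespace name below is the usual
    # default, so treat it as an assumption):
    $ sudo ip netns ls
    $ sudo ip netns exec amphora-haproxy ip a

The verify errors in the s_client output show which side rejects which
certificate, which helps distinguish a server-CA mismatch from a client-CA
mismatch.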
> >> Cheers,
> >> Erik
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL:
> > http://lists.openstack.org/pipermail/openstack-operators/attachments/20181020/71c8e27a/attachment.html
> >
> > ------------------------------
> >
> > Subject: Digest Footer
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > ------------------------------
> >
> > End of OpenStack-operators Digest, Vol 96, Issue 7
> > **************************************************
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tech at citoyx.com Wed Oct 24 15:05:36 2018
From: tech at citoyx.com (tech)
Date: Wed, 24 Oct 2018 17:05:36 +0200
Subject: [Openstack-operators] Automatic image conversion with glance
Message-ID:

At work, we are installing a new OpenStack platform based on Rocky, with
Ceph storage. On the previous platform, we had to manually convert images
to raw format before using glance image-create. Yesterday, I played with
task/taskflow to create a task in glance for converting the image. After
some time experimenting, I succeeded in making it work.

But it doesn't really answer my original need: I would like to know
whether it is possible to set up Glance so that, whatever source format is
supplied as input, it transparently converts the image to raw when it is
created with the standard "glance image-create ..." command.

Thank you for the help.
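If memory serves, Rocky's interoperable image import includes a plugin
that does exactly this conversion server-side, although it applies to the
import flow ("openstack image create --import" or glance
image-create-via-import) rather than the classic image-create path. A
hedged glance-api.conf sketch; verify the option names against the Rocky
sample configuration:

    [image_import_opts]
    image_import_plugins = ['image_conversion']

    [image_conversion]
    # Convert whatever source format is uploaded to raw before the
    # image becomes active:
    output_format = raw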
-- Dominique Carrel CITOYX SAS -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Wed Oct 24 15:16:10 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 24 Oct 2018 10:16:10 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: Forwarding to openstack-operators per Jay. On 10/24/18 10:10, Jay Pipes wrote: > Nova's API has the ability to create "quota classes", which are > basically limits for a set of resource types. There is something called > the "default quota class" which corresponds to the limits in the > CONF.quota section. Quota classes are basically templates of limits to > be applied if the calling project doesn't have any stored > project-specific limits. > > Has anyone ever created a quota class that is different from "default"? > > I'd like to propose deprecating this API and getting rid of this > functionality since it conflicts with the new Keystone /limits endpoint, > is highly coupled with RAX's turnstile middleware and I can't seem to > find anyone who has ever used it. Deprecating this API and functionality > would make the transition to a saner quota management system much easier > and straightforward. > > Also, I'm apparently blocked now from the operators ML so could someone > please forward this there? > > Thanks, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From florian.engelmann at everyware.ch Wed Oct 24 16:02:04 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 24 Oct 2018 18:02:04 +0200 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> Message-ID: <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> Am 10/24/18 um 2:08 PM schrieb Erik McCormick: > > > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann > > > wrote: > > Ohoh - thank you for your empathy :) > And those great details about how to setup this mgmt network. > I will try to do so this afternoon but solving that routing "puzzle" > (virtual network to control nodes) I will need our network guys to help > me out... > > But I will need to tell all Amphorae a static route to the gateway that > is routing to the control nodes? > > > Just set the default gateway when you create the neutron subnet. No need > for excess static routes. The route on the other connection won't > interfere with it as it lives in a namespace. My compute nodes have no br-ex and there is no L2 domain spread over all compute nodes. As far as I understood lb-mgmt-net is a provider network and has to be flat or VLAN and will need a "physical" gateway (as there is no virtual router). So the question - is it possible to get octavia up and running without a br-ex (L2 domain spread over all compute nodes) on the compute nodes? > > > > Am 10/23/18 um 6:57 PM schrieb Erik McCormick: > > So in your other email you said asked if there was a guide for > > deploying it with Kolla ansible... > > > > Oh boy. No there's not. 
> > I don't know if you've seen my recent mails on Octavia, but I am
> > going through this deployment process with kolla-ansible right now
> > and it is lacking in a few areas.
> >
> > If you plan to use different CA certificates for client and server in
> > Octavia, you'll need to add that into the playbook. Presently it only
> > copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > everything. I was completely unable to make it work with only one CA
> > as I got some SSL errors. It passes gate though, so I assume it must
> > work? I dunno.
> >
> > Networking comments and a really messy kolla-ansible / octavia
> > how-to below...
> >
> > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann wrote:
> >>
> >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> We did test Octavia with Pike (DVR deployment) and everything was
> >>>> working right out of the box. We changed our underlay network to a
> >>>> Layer3 spine-leaf network now and did not deploy DVR as we didn't
> >>>> want to have that many cables in a rack.
> >>>>
> >>>> Octavia is not working right now as the lb-mgmt-net does not exist on
> >>>> the compute nodes nor does a br-ex.
> >>>>
> >>>> The control nodes are running
> >>>>
> >>>> octavia_worker
> >>>> octavia_housekeeping
> >>>> octavia_health_manager
> >>>> octavia_api
> >>>>
> Amphora VMs, e.g. lb-mgmt-net 172.16.0.0/16 with a default GW
> >>>> and as far as I understood octavia_worker, octavia_housekeeping and
> >>>> octavia_health_manager have to talk to the amphora instances. But the
> >>>> control nodes are spread over three different leafs. So each control
> >>>> node is in a different L2 domain.
> >>>>
> >>>> So the question is how to deploy a lb-mgmt-net network in our setup?
> >>>>
> >>>> - Compute nodes have no "stretched" L2 domain
> >>>> - Control nodes, compute nodes and network nodes are in L3 networks
> >>>> like api, storage, ...
> >>>> - Only network nodes are connected to a L2 domain (with a separate
> >>>> NIC) providing the "public" network
> >>>>
> >>> You'll need to add a new bridge to your compute nodes and create a
> >>> provider network associated with that bridge. In my setup this is
> >>> simply a flat network tied to a tagged interface. In your case it
> >>> probably makes more sense to make a new VNI and create a vxlan
> >>> provider network. The routing in your switches should handle the rest.
> >>
> >> OK, that's what I'm trying right now. But I don't get how to set up
> >> something like a VxLAN provider network. I thought only vlan and flat
> >> are supported as provider network types? I guess it is not possible to
> >> use the tunnel interface that is used for tenant networks?
> >> So I have to create a separate VxLAN on the control and compute nodes
> >> like:
> >>
> >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> >> dev vlan3535 ttl 5
> >> # ip addr add 172.16.1.11/20 dev vxoctavia
> >> # ip link set vxoctavia up
> >>
> >> and use it like a flat provider network, true?
> >>
> > This is a fine way of doing things, but it's only half the battle.
> > You'll need to add a bridge on the compute nodes and bind it to that
> > new interface. Something like this if you're using openvswitch:
> >
> > docker exec openvswitch_db
> > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
> >
> > Also you'll want to remove the IP address from that interface as it's
> > going to be a bridge.
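Concretely, that would be something like the following on each compute
node, reusing the test address assigned above:

    # The bridge, not the member interface, matters from here on, and on
    # the compute nodes it should carry no IP at all:
    $ ip addr del 172.16.1.11/20 dev vxoctavia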
Think of it like your public (br-ex) interface > > on your network nodes. > > > >  From there you'll need to update the bridge mappings via kolla > > overrides. This would usually be in /etc/kolla/config/neutron. Create > > a subdirectory for your compute inventory group and create an > > ml2_conf.ini there. So you'd end up with something like: > > > > [root at kolla-deploy ~]# cat > /etc/kolla/config/neutron/compute/ml2_conf.ini > > [ml2_type_flat] > > flat_networks = mgmt-net > > > > [ovs] > > bridge_mappings = mgmt-net:br-mgmt > > > > run kolla-ansible --tags neutron reconfigure to push out the new > > configs. Note that there is a bug where the neutron containers > may not > > restart after the change, so you'll probably need to do a 'docker > > container restart neutron_openvswitch_agent' on each compute node. > > > > At this point, you'll need to create the provider network in the > admin > > project like: > > > > openstack network create --provider-network-type flat > > --provider-physical-network mgmt-net lb-mgmt-net > > > > And then create a normal subnet attached to this network with some > > largeish address scope. I wouldn't use 172.16.0.0/16 > because docker > > uses that by default. I'm not sure if it matters since the network > > traffic will be isolated on a bridge, but it makes me paranoid so I > > avoided it. > > > > For your controllers, I think you can just let everything > function off > > your api interface since you're routing in your spines. Set up a > > gateway somewhere from that lb-mgmt network and save yourself the > > complication of adding an interface to your controllers. If you > choose > > to use a separate interface on your controllers, you'll need to make > > sure this patch is in your kolla-ansible install or cherry pick it. > > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > > > I don't think that's been backported at all, so unless you're running > > off master you'll need to go get it. > > > >  From here on out, the regular Octavia instruction should serve you. > > Create a flavor, Create a security group, and capture their UUIDs > > along with the UUID of the provider network you made. Override > them in > > globals.yml with: > > > > octavia_amp_boot_network_list: > > octavia_amp_secgroup_list: > > octavia_amp_flavor_id: > > > > This is all from my scattered notes and bad memory. Hopefully it > makes > > sense. Corrections welcome. > > > > -Erik > > > > > > > >> > >> > >>> > >>> -Erik > >>>> > >>>> All the best, > >>>> Florian > >>>> _______________________________________________ > >>>> OpenStack-operators mailing list > >>>> OpenStack-operators at lists.openstack.org > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > > web: http://www.everyware.ch > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL:

From emccormick at cirrusseven.com Wed Oct 24 16:18:15 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Wed, 24 Oct 2018 12:18:15 -0400
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch>
 <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch>
Message-ID:

On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <
florian.engelmann at everyware.ch> wrote:

> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
> >
> > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann wrote:
> >
> > Ohoh - thank you for your empathy :)
> > And those great details about how to setup this mgmt network.
> > I will try to do so this afternoon but solving that routing "puzzle"
> > (virtual network to control nodes) I will need our network guys to help
> > me out...
> >
> > But I will need to tell all Amphorae a static route to the gateway that
> > is routing to the control nodes?
> >
> > Just set the default gateway when you create the neutron subnet. No need
> > for excess static routes. The route on the other connection won't
> > interfere with it as it lives in a namespace.
>
> My compute nodes have no br-ex and there is no L2 domain spread over all
> compute nodes. As far as I understood lb-mgmt-net is a provider network
> and has to be flat or VLAN and will need a "physical" gateway (as there
> is no virtual router).
> So the question - is it possible to get octavia up and running without a
> br-ex (L2 domain spread over all compute nodes) on the compute nodes?
>

Sorry, I only meant it was *like* br-ex on your network nodes. You don't
need that on your computes. The router here would be whatever does routing
in your physical network. Setting the gateway in the neutron subnet simply
adds that to the DHCP information sent to the amphorae.

This does bring up another thing I forgot though. You'll probably want to
add the management network / bridge to your network nodes or wherever you
run the DHCP agents. When you create the subnet, be sure to leave some
space in the address scope for the physical devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that for
the lb-mgmt network. It's a single network with a single subnet. Perhaps
you could limit load balancers to an AZ in a single rack? Seems not very
HA friendly.

> >
> > Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
> > > So in your other email you asked if there was a guide for
> > > deploying it with Kolla ansible...
> > >
> > > Oh boy. No there's not. I don't know if you've seen my recent mails on
> > > Octavia, but I am going through this deployment process with
> > > kolla-ansible right now and it is lacking in a few areas.
> > >
> > > If you plan to use different CA certificates for client and server in
> > > Octavia, you'll need to add that into the playbook. Presently it only
> > > copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > > everything. I was completely unable to make it work with only one CA
> > > as I got some SSL errors. It passes gate though, so I assume it must
> > > work? I dunno.
> > >
> > > Networking comments and a really messy kolla-ansible / octavia
> > how-to below...
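To make the gateway and address-scope advice above concrete, a hedged
sketch of the subnet creation (the CIDR and pool boundaries are arbitrary
examples):

    # Everything below the pool start stays free for physical devices
    # and controllers that need static IPs:
    $ openstack subnet create --network lb-mgmt-net \
          --subnet-range 172.31.0.0/22 \
          --gateway 172.31.0.1 \
          --allocation-pool start=172.31.1.0,end=172.31.3.254 \
          lb-mgmt-subnet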
> > > > > > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann > > > > > wrote: > > >> > > >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: > > >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann > > >>> > > wrote: > > >>>> > > >>>> Hi, > > >>>> > > >>>> We did test Octavia with Pike (DVR deployment) and everything > was > > >>>> working right our of the box. We changed our underlay network > to a > > >>>> Layer3 spine-leaf network now and did not deploy DVR as we > > don't wanted > > >>>> to have that much cables in a rack. > > >>>> > > >>>> Octavia is not working right now as the lb-mgmt-net does not > > exist on > > >>>> the compute nodes nor does a br-ex. > > >>>> > > >>>> The control nodes running > > >>>> > > >>>> octavia_worker > > >>>> octavia_housekeeping > > >>>> octavia_health_manager > > >>>> octavia_api > > >>>> > > Amphorae-VMs, z.b. > > > > lb-mgmt-net 172.16.0.0/16 default GW > > >>>> and as far as I understood octavia_worker, > > octavia_housekeeping and > > >>>> octavia_health_manager have to talk to the amphora instances. > > But the > > >>>> control nodes are spread over three different leafs. So each > > control > > >>>> node in a different L2 domain. > > >>>> > > >>>> So the question is how to deploy a lb-mgmt-net network in our > > setup? > > >>>> > > >>>> - Compute nodes have no "stretched" L2 domain > > >>>> - Control nodes, compute nodes and network nodes are in L3 > > networks like > > >>>> api, storage, ... > > >>>> - Only network nodes are connected to a L2 domain (with a > > separated NIC) > > >>>> providing the "public" network > > >>>> > > >>> You'll need to add a new bridge to your compute nodes and > create a > > >>> provider network associated with that bridge. In my setup this > is > > >>> simply a flat network tied to a tagged interface. In your case > it > > >>> probably makes more sense to make a new VNI and create a vxlan > > >>> provider network. The routing in your switches should handle > > the rest. > > >> > > >> Ok that's what I try right now. But I don't get how to setup > > something > > >> like a VxLAN provider Network. I thought only vlan and flat is > > supported > > >> as provider network? I guess it is not possible to use the tunnel > > >> interface that is used for tenant networks? > > >> So I have to create a separated VxLAN on the control and compute > > nodes like: > > >> > > >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group > > 239.1.1.1 > > >> dev vlan3535 ttl 5 > > >> # ip addr add 172.16.1.11/20 dev > vxoctavia > > >> # ip link set vxoctavia up > > >> > > >> and use it like a flat provider network, true? > > >> > > > This is a fine way of doing things, but it's only half the battle. > > > You'll need to add a bridge on the compute nodes and bind it to > that > > > new interface. Something like this if you're using openvswitch: > > > > > > docker exec openvswitch_db > > > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt > vxoctavia > > > > > > Also you'll want to remove the IP address from that interface as > it's > > > going to be a bridge. Think of it like your public (br-ex) > interface > > > on your network nodes. > > > > > > From there you'll need to update the bridge mappings via kolla > > > overrides. This would usually be in /etc/kolla/config/neutron. > Create > > > a subdirectory for your compute inventory group and create an > > > ml2_conf.ini there. 
So you'd end up with something like: > > > > > > [root at kolla-deploy ~]# cat > > /etc/kolla/config/neutron/compute/ml2_conf.ini > > > [ml2_type_flat] > > > flat_networks = mgmt-net > > > > > > [ovs] > > > bridge_mappings = mgmt-net:br-mgmt > > > > > > run kolla-ansible --tags neutron reconfigure to push out the new > > > configs. Note that there is a bug where the neutron containers > > may not > > > restart after the change, so you'll probably need to do a 'docker > > > container restart neutron_openvswitch_agent' on each compute node. > > > > > > At this point, you'll need to create the provider network in the > > admin > > > project like: > > > > > > openstack network create --provider-network-type flat > > > --provider-physical-network mgmt-net lb-mgmt-net > > > > > > And then create a normal subnet attached to this network with some > > > largeish address scope. I wouldn't use 172.16.0.0/16 > > because docker > > > uses that by default. I'm not sure if it matters since the network > > > traffic will be isolated on a bridge, but it makes me paranoid so > I > > > avoided it. > > > > > > For your controllers, I think you can just let everything > > function off > > > your api interface since you're routing in your spines. Set up a > > > gateway somewhere from that lb-mgmt network and save yourself the > > > complication of adding an interface to your controllers. If you > > choose > > > to use a separate interface on your controllers, you'll need to > make > > > sure this patch is in your kolla-ansible install or cherry pick > it. > > > > > > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > > > > > I don't think that's been backported at all, so unless you're > running > > > off master you'll need to go get it. > > > > > > From here on out, the regular Octavia instruction should serve > you. > > > Create a flavor, Create a security group, and capture their UUIDs > > > along with the UUID of the provider network you made. Override > > them in > > > globals.yml with: > > > > > > octavia_amp_boot_network_list: > > > octavia_amp_secgroup_list: > > > octavia_amp_flavor_id: > > > > > > This is all from my scattered notes and bad memory. Hopefully it > > makes > > > sense. Corrections welcome. > > > > > > -Erik > > > > > > > > > > > >> > > >> > > >>> > > >>> -Erik > > >>>> > > >>>> All the best, > > >>>> Florian > > >>>> _______________________________________________ > > >>>> OpenStack-operators mailing list > > >>>> OpenStack-operators at lists.openstack.org > > > > >>>> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > -- > > > > EveryWare AG > > Florian Engelmann > > Systems Engineer > > Zurlindenstrasse 52a > > CH-8003 Zürich > > > > tel: +41 44 466 60 00 > > fax: +41 44 466 60 10 > > mail: mailto:florian.engelmann at everyware.ch > > > > web: http://www.everyware.ch > > > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kendall at openstack.org Wed Oct 24 16:56:11 2018 From: kendall at openstack.org (Kendall Waters) Date: Wed, 24 Oct 2018 11:56:11 -0500 Subject: [Openstack-operators] Registration Prices Increase Today - OpenStack Summit Berlin Message-ID: <04574EB7-40F3-473B-9577-0F53FF5825A3@openstack.org> Hi everyone, Friendly reminder that the ticket price for the OpenStack Summit Berlin increases today, October 24 at 11:59pm PDT (October 25 at 6:59 UTC). Also, ALL registration codes (sponsor, speaker, ATC, AUC) will expire on November 2. Register now before the price increases!  Once you have registered, make sure to download the mobile app and plan your personal Summit schedule . Don’t forget to RSVP to intensive trainings as this is the only way you will be guaranteed a spot in the room! If you have any Summit related questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Oct 24 18:57:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 24 Oct 2018 13:57:05 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> On 10/24/2018 10:10 AM, Jay Pipes wrote: > I'd like to propose deprecating this API and getting rid of this > functionality since it conflicts with the new Keystone /limits endpoint, > is highly coupled with RAX's turnstile middleware and I can't seem to > find anyone who has ever used it. Deprecating this API and functionality > would make the transition to a saner quota management system much easier > and straightforward. I was trying to do this before it was cool: https://review.openstack.org/#/c/411035/ I think it was the Pike PTG in ATL where people said, "meh, let's just wait for unified limits from keystone and let this rot on the vine". I'd be happy to restore and update that spec. -- Thanks, Matt From florian.engelmann at everyware.ch Wed Oct 24 19:33:04 2018 From: florian.engelmann at everyware.ch (Engelmann Florian) Date: Wed, 24 Oct 2018 19:33:04 +0000 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch>, Message-ID: <1540409584381.85393@everyware.ch> On the network nodes we've got a dedicated interface to deploy VLANs (like the provider network for internet access). What about creating another VLAN on the network nodes, give that bridge a IP which is part of the subnet of lb-mgmt-net and start the octavia worker, healthmanager and controller on the network nodes binding to that IP? ________________________________ From: Erik McCormick Sent: Wednesday, October 24, 2018 6:18 PM To: Engelmann Florian Cc: openstack-operators Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann > wrote: Am 10/24/18 um 2:08 PM schrieb Erik McCormick: > > > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann > >> > wrote: > > Ohoh - thank you for your empathy :) > And those great details about how to setup this mgmt network. 
> I will try to do so this afternoon but solving that routing "puzzle" > (virtual network to control nodes) I will need our network guys to help > me out... > > But I will need to tell all Amphorae a static route to the gateway that > is routing to the control nodes? > > > Just set the default gateway when you create the neutron subnet. No need > for excess static routes. The route on the other connection won't > interfere with it as it lives in a namespace. My compute nodes have no br-ex and there is no L2 domain spread over all compute nodes. As far as I understood lb-mgmt-net is a provider network and has to be flat or VLAN and will need a "physical" gateway (as there is no virtual router). So the question - is it possible to get octavia up and running without a br-ex (L2 domain spread over all compute nodes) on the compute nodes? Sorry, I only meant it was *like* br-ex on your network nodes. You don't need that on your computes. The router here would be whatever does routing in your physical network. Setting the gateway in the neutron subnet simply adds that to the DHCP information sent to the amphorae. This does bring up another thingI forgot though. You'll probably want to add the management network / bridge to your network nodes or wherever you run the DHCP agents. When you create the subnet, be sure to leave some space in the address scope for the physical devices with static IPs. As for multiple L2 domains, I can't think of a way to go about that for the lb-mgmt network. It's a single network with a single subnet. Perhaps you could limit load balancers to an AZ in a single rack? Seems not very HA friendly. > > > > Am 10/23/18 um 6:57 PM schrieb Erik McCormick: > > So in your other email you said asked if there was a guide for > > deploying it with Kolla ansible... > > > > Oh boy. No there's not. I don't know if you've seen my recent > mails on > > Octavia, but I am going through this deployment process with > > kolla-ansible right now and it is lacking in a few areas. > > > > If you plan to use different CA certificates for client and server in > > Octavia, you'll need to add that into the playbook. Presently it only > > copies over ca_01.pem, cacert.key, and client.pem and uses them for > > everything. I was completely unable to make it work with only one CA > > as I got some SSL errors. It passes gate though, so I aasume it must > > work? I dunno. > > > > Networking comments and a really messy kolla-ansible / octavia > how-to below... > > > > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann > > > >> wrote: > >> > >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: > >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann > >>> > >> wrote: > >>>> > >>>> Hi, > >>>> > >>>> We did test Octavia with Pike (DVR deployment) and everything was > >>>> working right our of the box. We changed our underlay network to a > >>>> Layer3 spine-leaf network now and did not deploy DVR as we > don't wanted > >>>> to have that much cables in a rack. > >>>> > >>>> Octavia is not working right now as the lb-mgmt-net does not > exist on > >>>> the compute nodes nor does a br-ex. > >>>> > >>>> The control nodes running > >>>> > >>>> octavia_worker > >>>> octavia_housekeeping > >>>> octavia_health_manager > >>>> octavia_api > >>>> > Amphorae-VMs, z.b. > > lb-mgmt-net 172.16.0.0/16 default GW > >>>> and as far as I understood octavia_worker, > octavia_housekeeping and > >>>> octavia_health_manager have to talk to the amphora instances. > But the > >>>> control nodes are spread over three different leafs. 
So each > control > >>>> node in a different L2 domain. > >>>> > >>>> So the question is how to deploy a lb-mgmt-net network in our > setup? > >>>> > >>>> - Compute nodes have no "stretched" L2 domain > >>>> - Control nodes, compute nodes and network nodes are in L3 > networks like > >>>> api, storage, ... > >>>> - Only network nodes are connected to a L2 domain (with a > separated NIC) > >>>> providing the "public" network > >>>> > >>> You'll need to add a new bridge to your compute nodes and create a > >>> provider network associated with that bridge. In my setup this is > >>> simply a flat network tied to a tagged interface. In your case it > >>> probably makes more sense to make a new VNI and create a vxlan > >>> provider network. The routing in your switches should handle > the rest. > >> > >> Ok that's what I try right now. But I don't get how to setup > something > >> like a VxLAN provider Network. I thought only vlan and flat is > supported > >> as provider network? I guess it is not possible to use the tunnel > >> interface that is used for tenant networks? > >> So I have to create a separated VxLAN on the control and compute > nodes like: > >> > >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group > 239.1.1.1 > >> dev vlan3535 ttl 5 > >> # ip addr add 172.16.1.11/20 dev vxoctavia > >> # ip link set vxoctavia up > >> > >> and use it like a flat provider network, true? > >> > > This is a fine way of doing things, but it's only half the battle. > > You'll need to add a bridge on the compute nodes and bind it to that > > new interface. Something like this if you're using openvswitch: > > > > docker exec openvswitch_db > > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia > > > > Also you'll want to remove the IP address from that interface as it's > > going to be a bridge. Think of it like your public (br-ex) interface > > on your network nodes. > > > > From there you'll need to update the bridge mappings via kolla > > overrides. This would usually be in /etc/kolla/config/neutron. Create > > a subdirectory for your compute inventory group and create an > > ml2_conf.ini there. So you'd end up with something like: > > > > [root at kolla-deploy ~]# cat > /etc/kolla/config/neutron/compute/ml2_conf.ini > > [ml2_type_flat] > > flat_networks = mgmt-net > > > > [ovs] > > bridge_mappings = mgmt-net:br-mgmt > > > > run kolla-ansible --tags neutron reconfigure to push out the new > > configs. Note that there is a bug where the neutron containers > may not > > restart after the change, so you'll probably need to do a 'docker > > container restart neutron_openvswitch_agent' on each compute node. > > > > At this point, you'll need to create the provider network in the > admin > > project like: > > > > openstack network create --provider-network-type flat > > --provider-physical-network mgmt-net lb-mgmt-net > > > > And then create a normal subnet attached to this network with some > > largeish address scope. I wouldn't use 172.16.0.0/16 > because docker > > uses that by default. I'm not sure if it matters since the network > > traffic will be isolated on a bridge, but it makes me paranoid so I > > avoided it. > > > > For your controllers, I think you can just let everything > function off > > your api interface since you're routing in your spines. Set up a > > gateway somewhere from that lb-mgmt network and save yourself the > > complication of adding an interface to your controllers. 
If you > choose > > to use a separate interface on your controllers, you'll need to make > > sure this patch is in your kolla-ansible install or cherry pick it. > > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > > > > I don't think that's been backported at all, so unless you're running > > off master you'll need to go get it. > > > > From here on out, the regular Octavia instruction should serve you. > > Create a flavor, Create a security group, and capture their UUIDs > > along with the UUID of the provider network you made. Override > them in > > globals.yml with: > > > > octavia_amp_boot_network_list: > > octavia_amp_secgroup_list: > > octavia_amp_flavor_id: > > > > This is all from my scattered notes and bad memory. Hopefully it > makes > > sense. Corrections welcome. > > > > -Erik > > > > > > > >> > >> > >>> > >>> -Erik > >>>> > >>>> All the best, > >>>> Florian > >>>> _______________________________________________ > >>>> OpenStack-operators mailing list > >>>> OpenStack-operators at lists.openstack.org > > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > > > web: http://www.everyware.ch > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From melwittt at gmail.com Wed Oct 24 19:54:00 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 12:54:00 -0700 Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote: > On 10/24/2018 10:10 AM, Jay Pipes wrote: >> I'd like to propose deprecating this API and getting rid of this >> functionality since it conflicts with the new Keystone /limits endpoint, >> is highly coupled with RAX's turnstile middleware and I can't seem to >> find anyone who has ever used it. Deprecating this API and functionality >> would make the transition to a saner quota management system much easier >> and straightforward. > I was trying to do this before it was cool: > > https://review.openstack.org/#/c/411035/ > > I think it was the Pike PTG in ATL where people said, "meh, let's just > wait for unified limits from keystone and let this rot on the vine". > > I'd be happy to restore and update that spec. Yeah, we were thinking the presence of the API and code isn't harming anything and sometimes we talk about situations where we could use them. Quota classes come up occasionally whenever we talk about preemptible instances. Example: we could create and use a quota class "preemptible" and decorate preemptible flavors with that quota_class in order to give them unlimited quota. 
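To sketch that concretely: the class side could be managed with the existing os-quota-class-sets API today, while the flavor decoration is the part that does not yet exist, so the extra spec key below is purely illustrative:

  # create/update a "preemptible" quota class; the class is created
  # implicitly on first update, and -1 means unlimited
  nova quota-class-update preemptible --instances -1

  # hypothetical flavor decoration; nova has no such extra spec today
  openstack flavor set m1.preemptible --property quota:quota_class=preemptible

The quota engine would then count instances of such flavors against the "preemptible" class instead of the project's normal limits.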
There's also talk of quota classes in the "Count quota based on resource class" spec [1] where we could have leveraged quota classes to create and enforce quota limits per custom resource class. But I think the consensus there was to hold off on quota by custom resource class until we migrate to unified limits and oslo.limit.

So, I think my concern in removing the internal code that is capable of enforcing a quota limit per quota class is the preemptible instance use case. I don't have my mind wrapped around if/how we could solve it using unified limits yet.

And I was just thinking, if we added a project_id column to the quota_classes table and correspondingly added it to the os-quota-class-sets API, we could pretty simply implement quota by flavor, which is a feature operators like Oath need. An operator could create a quota class limit per project_id and then decorate flavors with quota_class to enforce them per flavor.

I recognize that maybe it would be too confusing to solve use cases with quota classes given that we're going to migrate to unified limits. At the same time, I'm hesitant to close the door on a possibility before we have some idea about how we'll solve them without quota classes. Has anyone thought about how we can solve the use cases with unified limits for things like preemptible instances and quota by flavor?

Cheers,
-melanie

[1] https://review.openstack.org/569011

From emccormick at cirrusseven.com  Wed Oct 24 20:01:30 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Wed, 24 Oct 2018 16:01:30 -0400
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <1540409584381.85393@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch>
Message-ID: 

On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian <florian.engelmann at everyware.ch> wrote:

> On the network nodes we've got a dedicated interface to deploy VLANs (like
> the provider network for internet access). What about creating another VLAN
> on the network nodes, giving that bridge an IP which is part of the subnet
> of lb-mgmt-net, and starting the octavia worker, healthmanager and
> controller on the network nodes binding to that IP?

The problem with that is you can't put an IP on the VLAN interface and also use it as an OVS bridge, so the Octavia processes would have nothing to bind to.

> ------------------------------
> *From:* Erik McCormick <emccormick at cirrusseven.com>
> *Sent:* Wednesday, October 24, 2018 6:18 PM
> *To:* Engelmann Florian
> *Cc:* openstack-operators
> *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN
> without DVR
>
> On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
> <florian.engelmann at everyware.ch> wrote:
>
>> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>> >
>> > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>> > <florian.engelmann at everyware.ch> wrote:
>> >
>> >     Ohoh - thank you for your empathy :)
>> >     And those great details about how to set up this mgmt network.
>> >     I will try to do so this afternoon, but to solve that routing "puzzle"
>> >     (virtual network to control nodes) I will need our network guys to
>> >     help me out...
>> >
>> >     But will I need to tell all Amphorae a static route to the gateway
>> >     that is routing to the control nodes?
>> >
>> > Just set the default gateway when you create the neutron subnet. No need
>> > for excess static routes. The route on the other connection won't
>> >> >> My compute nodes have no br-ex and there is no L2 domain spread over all >> compute nodes. As far as I understood lb-mgmt-net is a provider network >> and has to be flat or VLAN and will need a "physical" gateway (as there >> is no virtual router). >> So the question - is it possible to get octavia up and running without a >> br-ex (L2 domain spread over all compute nodes) on the compute nodes? >> > > Sorry, I only meant it was *like* br-ex on your network nodes. You don't > need that on your computes. > > The router here would be whatever does routing in your physical network. > Setting the gateway in the neutron subnet simply adds that to the DHCP > information sent to the amphorae. > > This does bring up another thingI forgot though. You'll probably want to > add the management network / bridge to your network nodes or wherever you > run the DHCP agents. When you create the subnet, be sure to leave some > space in the address scope for the physical devices with static IPs. > > As for multiple L2 domains, I can't think of a way to go about that for > the lb-mgmt network. It's a single network with a single subnet. Perhaps > you could limit load balancers to an AZ in a single rack? Seems not very HA > friendly. > >> >> > > >> > >> > >> > Am 10/23/18 um 6:57 PM schrieb Erik McCormick: >> > > So in your other email you said asked if there was a guide for >> > > deploying it with Kolla ansible... >> > > >> > > Oh boy. No there's not. I don't know if you've seen my recent >> > mails on >> > > Octavia, but I am going through this deployment process with >> > > kolla-ansible right now and it is lacking in a few areas. >> > > >> > > If you plan to use different CA certificates for client and >> server in >> > > Octavia, you'll need to add that into the playbook. Presently it >> only >> > > copies over ca_01.pem, cacert.key, and client.pem and uses them >> for >> > > everything. I was completely unable to make it work with only >> one CA >> > > as I got some SSL errors. It passes gate though, so I aasume it >> must >> > > work? I dunno. >> > > >> > > Networking comments and a really messy kolla-ansible / octavia >> > how-to below... >> > > >> > > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann >> > > > > > wrote: >> > >> >> > >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: >> > >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann >> > >>> > > > wrote: >> > >>>> >> > >>>> Hi, >> > >>>> >> > >>>> We did test Octavia with Pike (DVR deployment) and everything >> was >> > >>>> working right our of the box. We changed our underlay network >> to a >> > >>>> Layer3 spine-leaf network now and did not deploy DVR as we >> > don't wanted >> > >>>> to have that much cables in a rack. >> > >>>> >> > >>>> Octavia is not working right now as the lb-mgmt-net does not >> > exist on >> > >>>> the compute nodes nor does a br-ex. >> > >>>> >> > >>>> The control nodes running >> > >>>> >> > >>>> octavia_worker >> > >>>> octavia_housekeeping >> > >>>> octavia_health_manager >> > >>>> octavia_api >> > >>>> >> > Amphorae-VMs, z.b. >> > >> > lb-mgmt-net 172.16.0.0/16 default GW >> > >>>> and as far as I understood octavia_worker, >> > octavia_housekeeping and >> > >>>> octavia_health_manager have to talk to the amphora instances. >> > But the >> > >>>> control nodes are spread over three different leafs. So each >> > control >> > >>>> node in a different L2 domain. >> > >>>> >> > >>>> So the question is how to deploy a lb-mgmt-net network in our >> > setup? 
>> > >>>> >> > >>>> - Compute nodes have no "stretched" L2 domain >> > >>>> - Control nodes, compute nodes and network nodes are in L3 >> > networks like >> > >>>> api, storage, ... >> > >>>> - Only network nodes are connected to a L2 domain (with a >> > separated NIC) >> > >>>> providing the "public" network >> > >>>> >> > >>> You'll need to add a new bridge to your compute nodes and >> create a >> > >>> provider network associated with that bridge. In my setup this >> is >> > >>> simply a flat network tied to a tagged interface. In your case >> it >> > >>> probably makes more sense to make a new VNI and create a vxlan >> > >>> provider network. The routing in your switches should handle >> > the rest. >> > >> >> > >> Ok that's what I try right now. But I don't get how to setup >> > something >> > >> like a VxLAN provider Network. I thought only vlan and flat is >> > supported >> > >> as provider network? I guess it is not possible to use the >> tunnel >> > >> interface that is used for tenant networks? >> > >> So I have to create a separated VxLAN on the control and compute >> > nodes like: >> > >> >> > >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group >> > 239.1.1.1 >> > >> dev vlan3535 ttl 5 >> > >> # ip addr add 172.16.1.11/20 dev >> vxoctavia >> > >> # ip link set vxoctavia up >> > >> >> > >> and use it like a flat provider network, true? >> > >> >> > > This is a fine way of doing things, but it's only half the >> battle. >> > > You'll need to add a bridge on the compute nodes and bind it to >> that >> > > new interface. Something like this if you're using openvswitch: >> > > >> > > docker exec openvswitch_db >> > > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt >> vxoctavia >> > > >> > > Also you'll want to remove the IP address from that interface as >> it's >> > > going to be a bridge. Think of it like your public (br-ex) >> interface >> > > on your network nodes. >> > > >> > > From there you'll need to update the bridge mappings via kolla >> > > overrides. This would usually be in /etc/kolla/config/neutron. >> Create >> > > a subdirectory for your compute inventory group and create an >> > > ml2_conf.ini there. So you'd end up with something like: >> > > >> > > [root at kolla-deploy ~]# cat >> > /etc/kolla/config/neutron/compute/ml2_conf.ini >> > > [ml2_type_flat] >> > > flat_networks = mgmt-net >> > > >> > > [ovs] >> > > bridge_mappings = mgmt-net:br-mgmt >> > > >> > > run kolla-ansible --tags neutron reconfigure to push out the new >> > > configs. Note that there is a bug where the neutron containers >> > may not >> > > restart after the change, so you'll probably need to do a 'docker >> > > container restart neutron_openvswitch_agent' on each compute >> node. >> > > >> > > At this point, you'll need to create the provider network in the >> > admin >> > > project like: >> > > >> > > openstack network create --provider-network-type flat >> > > --provider-physical-network mgmt-net lb-mgmt-net >> > > >> > > And then create a normal subnet attached to this network with >> some >> > > largeish address scope. I wouldn't use 172.16.0.0/16 >> > because docker >> > > uses that by default. I'm not sure if it matters since the >> network >> > > traffic will be isolated on a bridge, but it makes me paranoid >> so I >> > > avoided it. >> > > >> > > For your controllers, I think you can just let everything >> > function off >> > > your api interface since you're routing in your spines. 
Set up a >> > > gateway somewhere from that lb-mgmt network and save yourself the >> > > complication of adding an interface to your controllers. If you >> > choose >> > > to use a separate interface on your controllers, you'll need to >> make >> > > sure this patch is in your kolla-ansible install or cherry pick >> it. >> > > >> > > >> > >> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 >> > > >> > > I don't think that's been backported at all, so unless you're >> running >> > > off master you'll need to go get it. >> > > >> > > From here on out, the regular Octavia instruction should serve >> you. >> > > Create a flavor, Create a security group, and capture their UUIDs >> > > along with the UUID of the provider network you made. Override >> > them in >> > > globals.yml with: >> > > >> > > octavia_amp_boot_network_list: >> > > octavia_amp_secgroup_list: >> > > octavia_amp_flavor_id: >> > > >> > > This is all from my scattered notes and bad memory. Hopefully it >> > makes >> > > sense. Corrections welcome. >> > > >> > > -Erik >> > > >> > > >> > > >> > >> >> > >> >> > >>> >> > >>> -Erik >> > >>>> >> > >>>> All the best, >> > >>>> Florian >> > >>>> _______________________________________________ >> > >>>> OpenStack-operators mailing list >> > >>>> OpenStack-operators at lists.openstack.org >> > >> > >>>> >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > >> > -- >> > >> > EveryWare AG >> > Florian Engelmann >> > Systems Engineer >> > Zurlindenstrasse 52a >> > CH-8003 Zürich >> > >> > tel: +41 44 466 60 00 >> > fax: +41 44 466 60 10 >> > mail: mailto:florian.engelmann at everyware.ch >> > >> > web: http://www.everyware.ch >> > >> >> -- >> >> EveryWare AG >> Florian Engelmann >> Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail: mailto:florian.engelmann at everyware.ch >> web: http://www.everyware.ch >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Oct 24 20:06:24 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 24 Oct 2018 15:06:24 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, Oct 24, 2018 at 2:49 PM Jay Pipes wrote: > On 10/24/2018 02:57 PM, Matt Riedemann wrote: > > On 10/24/2018 10:10 AM, Jay Pipes wrote: > >> I'd like to propose deprecating this API and getting rid of this > >> functionality since it conflicts with the new Keystone /limits > >> endpoint, is highly coupled with RAX's turnstile middleware and I > >> can't seem to find anyone who has ever used it. Deprecating this API > >> and functionality would make the transition to a saner quota > >> management system much easier and straightforward. > > > > I was trying to do this before it was cool: > > > > https://review.openstack.org/#/c/411035/ > > > > I think it was the Pike PTG in ATL where people said, "meh, let's just > > wait for unified limits from keystone and let this rot on the vine". > > > > I'd be happy to restore and update that spec. > > ++ > > I think partly things have stalled out because maybe each side (keystone > + nova) think the other is working on something but isn't? 
I have a Post-it on my monitor to follow up with what we talked about at the PTG. AFAIK, the next steps were to use the examples we went through and apply them to nova [0] using oslo.limit. We were hoping this would do two things. First, it would expose any remaining gaps we have in oslo.limit that need to get closed before other services start using the library. Second, we could iterate on the example in gerrit as a nova review, making it easier to merge when it's working. Is that still the case and if so, how can I help?

[0] https://gist.github.com/lbragstad/69d28dca8adfa689c00b272d6db8bde7

> I'm currently working on cleaning up the quota system and would be happy
> to deprecate the os-quota-classes API along with the patch series that
> does that cleanup.
>
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com  Wed Oct 24 20:33:10 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 24 Oct 2018 15:33:10 -0500
Subject: [Openstack-operators] [nova] Removing the CachingScheduler
In-Reply-To: 
References: 
Message-ID: <2b8c28fc-dcdb-54b1-888a-e579bd2a8855@gmail.com>

On 10/18/2018 5:07 PM, Matt Riedemann wrote:
> It's been deprecated since Pike, and the time has come to remove it [1].
>
> mgagne has been the most vocal CachingScheduler operator I know and he
> has tested out the "nova-manage placement heal_allocations" CLI, added
> in Rocky, and said it will work for migrating his deployment from the
> CachingScheduler to the FilterScheduler + Placement.
>
> If you are using the CachingScheduler and have a problem with its
> removal, now is the time to speak up or forever hold your peace.
>
> [1] https://review.openstack.org/#/c/611723/1

This is your last chance to speak up if you are using the CachingScheduler and object to it being removed from nova in Stein. I have removed the -W pin from the review since a series of feature work is now stacked on top of it.

-- 

Thanks,

Matt

From dinesh.bhor at linecorp.com  Thu Oct 25 05:12:51 2018
From: dinesh.bhor at linecorp.com (Bhor Dinesh)
Date: Thu, 25 Oct 2018 14:12:51 +0900
Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?
In-Reply-To: 
References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com>
Message-ID: 

Hi All,

We have a similar use case like *Preemptible Instances*, called *Rich-VMs*, which are high in resources and are deployed one per hypervisor. We have custom code in production which tracks the quota for such instances separately, and for the same reason we have a *rich_instances* custom quota class, same as the *instances* quota class. I discussed this pretty recently with sean-k-mooney; I hope he remembers it.

Bhor Dinesh
Verda2 Team
23F JR Shinjuku Miraina Tower, 4-1-6 Shinjuku, Shinjuku-ku, Tokyo 160-0022
Mobile 08041289520
Fax 03-4316-2116
Email dinesh.bhor at linecorp.com

-----Original Message-----
From: "Kevin L.
Mitchell" To: "OpenStack Development Mailing List (not for usage questions)"; "openstack-operators at lists.openstack.org"; Cc: Sent: Oct 25, 2018 (Thu) 11:35:08 Subject: Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? > On 10/24/18 10:10, Jay Pipes wrote: > > Nova's API has the ability to create "quota classes", which are > > basically limits for a set of resource types. There is something called > > the "default quota class" which corresponds to the limits in the > > CONF.quota section. Quota classes are basically templates of limits to > > be applied if the calling project doesn't have any stored > > project-specific limits. For the record, my original concept in creating quota classes is that you'd be able to set quotas per tier of user and easily be able to move users from one tier to another. This was just a neat idea I had, and AFAIK, Rackspace never used it, so you can call it YAGNI as far as I'm concerned :) > > Has anyone ever created a quota class that is different from "default"? > > > > I'd like to propose deprecating this API and getting rid of this > > functionality since it conflicts with the new Keystone /limits endpoint, > > is highly coupled with RAX's turnstile middleware I didn't intend it to be highly coupled, but it's been a while since I wrote it, and of course I've matured as a developer since then, so *shrug*. I also don't think Rackspace has ever used turnstile. > > and I can't seem to > > find anyone who has ever used it. Deprecating this API and functionality > > would make the transition to a saner quota management system much easier > > and straightforward. I'm fine with that plan, speaking as the original developer; as I say, I don't think Rackspace ever utilized the functionality anyway, and if no one else pipes up saying that they're using it, I'd be all over deprecating the quota classes in favor of the new hotness. -- Kevin L. Mitchell __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Oct 25 06:10:12 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 23:10:12 -0700 Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, 24 Oct 2018 12:54:00 -0700, Melanie Witt wrote: > On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote: >> On 10/24/2018 10:10 AM, Jay Pipes wrote: >>> I'd like to propose deprecating this API and getting rid of this >>> functionality since it conflicts with the new Keystone /limits endpoint, >>> is highly coupled with RAX's turnstile middleware and I can't seem to >>> find anyone who has ever used it. Deprecating this API and functionality >>> would make the transition to a saner quota management system much easier >>> and straightforward. >> I was trying to do this before it was cool: >> >> https://review.openstack.org/#/c/411035/ >> >> I think it was the Pike PTG in ATL where people said, "meh, let's just >> wait for unified limits from keystone and let this rot on the vine". 
>> I'd be happy to restore and update that spec.

> Yeah, we were thinking the presence of the API and code isn't harming
> anything and sometimes we talk about situations where we could use them.
>
> Quota classes come up occasionally whenever we talk about preemptible
> instances. Example: we could create and use a quota class "preemptible"
> and decorate preemptible flavors with that quota_class in order to give
> them unlimited quota. There's also talk of quota classes in the "Count
> quota based on resource class" spec [1] where we could have leveraged
> quota classes to create and enforce quota limits per custom resource
> class. But I think the consensus there was to hold off on quota by
> custom resource class until we migrate to unified limits and oslo.limit.
>
> So, I think my concern in removing the internal code that is capable of
> enforcing a quota limit per quota class is the preemptible instance use
> case. I don't have my mind wrapped around if/how we could solve it using
> unified limits yet.
>
> And I was just thinking, if we added a project_id column to the
> quota_classes table and correspondingly added it to the
> os-quota-class-sets API, we could pretty simply implement quota by
> flavor, which is a feature operators like Oath need. An operator could
> create a quota class limit per project_id and then decorate flavors with
> quota_class to enforce them per flavor.
>
> I recognize that maybe it would be too confusing to solve use cases with
> quota classes given that we're going to migrate to unified limits. At the
> same time, I'm hesitant to close the door on a possibility before we
> have some idea about how we'll solve them without quota classes. Has
> anyone thought about how we can solve the use cases with unified limits
> for things like preemptible instances and quota by flavor?
>
> [1] https://review.openstack.org/569011

After I sent this, I realized that I _have_ thought about how to solve these use cases with unified limits before and commented about it on the "Count quota based on resource class" spec some months ago.

For preemptible instances, we could leverage registered limits in keystone [2] (registered limits span across all projects) by creating a limit with resource_name='preemptible', for example. Then we could decorate a flavor with quota_resource_name='preemptible', which would designate a preemptible instance type. Then we use the quota_resource_name from the flavor to check the quota for the corresponding registered limit in keystone. This way, preemptible instances can be assigned their own special quota (probably unlimited).

And for quota by flavor, same concept. I think we could use registered limits and project limits [3] by creating limits with resource_name='flavorX', for example. We could decorate flavors with quota_resource_name='flavorX' and check quota against the special quota for flavorX.

Unified limits provide all of the same ability as quota classes, as far as I can tell. Given that, I think we are OK to deprecate quota classes.
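As a rough sketch of the above (assuming the unified limits commands in python-openstackclient; the resource names and the flavor-side quota_resource_name lookup are the hypothetical parts):

  # registered limit spanning all projects: "preemptible" instances,
  # with a very large default standing in for "unlimited"
  openstack registered limit create --service nova \
      --default-limit 1000000 preemptible

  # per-project override for quota-by-flavor, e.g. 10 instances of flavorX
  openstack limit create --service nova --project my-project \
      --resource-limit 10 flavorX

Nova would then read quota_resource_name off the flavor and enforce against whichever limit matches.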
Cheers,
-melanie

[2] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits
[3] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-limits

From melwittt at gmail.com  Thu Oct 25 06:14:40 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 24 Oct 2018 23:14:40 -0700
Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?
In-Reply-To: 
References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com>
Message-ID: 

On Thu, 25 Oct 2018 14:12:51 +0900, Bhor Dinesh wrote:
> We have a similar use case like *Preemptible Instances*, called
> *Rich-VMs*, which are high in resources and are deployed one per
> hypervisor. We have custom code in production which tracks the quota
> for such instances separately, and for the same reason we have a
> *rich_instances* custom quota class, same as the *instances* quota class.

Please see the last reply I recently sent on this thread. I have been thinking the same as you about how we could use quota classes to implement the quota piece of preemptible instances. I think we can achieve the same thing using unified limits, specifically registered limits [1], which span across all projects. So, I think we are covered moving forward with migrating to unified limits and deprecating quota classes. Let me know if you spot any issues with this idea.

Cheers,
-melanie

[1] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits

From florian.engelmann at everyware.ch  Thu Oct 25 08:03:45 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Thu, 25 Oct 2018 10:03:45 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: 
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch>
Message-ID: <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch>

Hmm - so right now I can't see any routed option, because:

The gateway connected to the VLAN provider networks (bond1 on the network nodes) is not able to route any traffic to my control nodes in the spine-leaf layer3 backend network.

And right now there is no br-ex at all, nor any "stretched" L2 domain connecting all compute nodes.

So the only solution I can think of right now is to create an overlay VxLAN in the spine-leaf backend network, connect all compute and control nodes to this overlay L2 network, create an OVS bridge connected to that network on the compute nodes, and allow the Amphorae to get an IP in this network as well.
Not to forget about DHCP... so the network nodes will need this bridge as well.

Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>
> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
> <florian.engelmann at everyware.ch> wrote:
>
>     On the network nodes we've got a dedicated interface to deploy VLANs
>     (like the provider network for internet access). What about creating
>     another VLAN on the network nodes, giving that bridge an IP which is
>     part of the subnet of lb-mgmt-net, and starting the octavia worker,
>     healthmanager and controller on the network nodes binding to that IP?
>
> The problem with that is you can't put an IP on the VLAN interface and
> also use it as an OVS bridge, so the Octavia processes would have
> nothing to bind to.
> > > ------------------------------------------------------------------------ > *From:* Erik McCormick > > *Sent:* Wednesday, October 24, 2018 6:18 PM > *To:* Engelmann Florian > *Cc:* openstack-operators > *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and > VxLAN without DVR > > > On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann > > wrote: > > Am 10/24/18 um 2:08 PM schrieb Erik McCormick: > > > > > > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann > > > >> > > wrote: > > > >     Ohoh - thank you for your empathy :) > >     And those great details about how to setup this mgmt network. > >     I will try to do so this afternoon but solving that > routing "puzzle" > >     (virtual network to control nodes) I will need our > network guys to help > >     me out... > > > >     But I will need to tell all Amphorae a static route to > the gateway that > >     is routing to the control nodes? > > > > > > Just set the default gateway when you create the neutron > subnet. No need > > for excess static routes. The route on the other connection > won't > > interfere with it as it lives in a namespace. > > > My compute nodes have no br-ex and there is no L2 domain spread > over all > compute nodes. As far as I understood lb-mgmt-net is a provider > network > and has to be flat or VLAN and will need a "physical" gateway > (as there > is no virtual router). > So the question - is it possible to get octavia up and running > without a > br-ex (L2 domain spread over all compute nodes) on the compute > nodes? > > > Sorry, I only meant it was *like* br-ex on your network nodes. You > don't need that on your computes. > > The router here would be whatever does routing in your physical > network. Setting the gateway in the neutron subnet simply adds that > to the DHCP information sent to the amphorae. > > This does bring up another thingI forgot  though. You'll probably > want to add the management network / bridge to your network nodes or > wherever you run the DHCP agents. When you create the subnet, be > sure to leave some space in the address scope for the physical > devices with static IPs. > > As for multiple L2 domains, I can't think of a way to go about that > for the lb-mgmt network. It's a single network with a single subnet. > Perhaps you could limit load balancers to an AZ in a single rack? > Seems not very HA friendly. > > > > > > > > > > >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick: > >      > So in your other email you said asked if there was a > guide for > >      > deploying it with Kolla ansible... > >      > > >      > Oh boy. No there's not. I don't know if you've seen my > recent > >     mails on > >      > Octavia, but I am going through this deployment > process with > >      > kolla-ansible right now and it is lacking in a few areas. > >      > > >      > If you plan to use different CA certificates for > client and server in > >      > Octavia, you'll need to add that into the playbook. > Presently it only > >      > copies over ca_01.pem, cacert.key, and client.pem and > uses them for > >      > everything. I was completely unable to make it work > with only one CA > >      > as I got some SSL errors. It passes gate though, so I > aasume it must > >      > work? I dunno. > >      > > >      > Networking comments and a really messy kolla-ansible / > octavia > >     how-to below... 
> >      > > >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann > >      > > >      >> wrote: > >      >> > >      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: > >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann > >      >>> > >      >> wrote: > >      >>>> > >      >>>> Hi, > >      >>>> > >      >>>> We did test Octavia with Pike (DVR deployment) and > everything was > >      >>>> working right our of the box. We changed our > underlay network to a > >      >>>> Layer3 spine-leaf network now and did not deploy > DVR as we > >     don't wanted > >      >>>> to have that much cables in a rack. > >      >>>> > >      >>>> Octavia is not working right now as the lb-mgmt-net > does not > >     exist on > >      >>>> the compute nodes nor does a br-ex. > >      >>>> > >      >>>> The control nodes running > >      >>>> > >      >>>> octavia_worker > >      >>>> octavia_housekeeping > >      >>>> octavia_health_manager > >      >>>> octavia_api > >      >>>> > >     Amphorae-VMs, z.b. > > > >     lb-mgmt-net 172.16.0.0/16 > default GW > >      >>>> and as far as I understood octavia_worker, > >     octavia_housekeeping and > >      >>>> octavia_health_manager have to talk to the amphora > instances. > >     But the > >      >>>> control nodes are spread over three different > leafs. So each > >     control > >      >>>> node in a different L2 domain. > >      >>>> > >      >>>> So the question is how to deploy a lb-mgmt-net > network in our > >     setup? > >      >>>> > >      >>>> - Compute nodes have no "stretched" L2 domain > >      >>>> - Control nodes, compute nodes and network nodes > are in L3 > >     networks like > >      >>>> api, storage, ... > >      >>>> - Only network nodes are connected to a L2 domain > (with a > >     separated NIC) > >      >>>> providing the "public" network > >      >>>> > >      >>> You'll need to add a new bridge to your compute > nodes and create a > >      >>> provider network associated with that bridge. In my > setup this is > >      >>> simply a flat network tied to a tagged interface. In > your case it > >      >>> probably makes more sense to make a new VNI and > create a vxlan > >      >>> provider network. The routing in your switches > should handle > >     the rest. > >      >> > >      >> Ok that's what I try right now. But I don't get how > to setup > >     something > >      >> like a VxLAN provider Network. I thought only vlan > and flat is > >     supported > >      >> as provider network? I guess it is not possible to > use the tunnel > >      >> interface that is used for tenant networks? > >      >> So I have to create a separated VxLAN on the control > and compute > >     nodes like: > >      >> > >      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 > group > >     239.1.1.1 > >      >> dev vlan3535 ttl 5 > >      >> # ip addr add 172.16.1.11/20 > dev vxoctavia > >      >> # ip link set vxoctavia up > >      >> > >      >> and use it like a flat provider network, true? > >      >> > >      > This is a fine way of doing things, but it's only half > the battle. > >      > You'll need to add a bridge on the compute nodes and > bind it to that > >      > new interface. Something like this if you're using > openvswitch: > >      > > >      > docker exec openvswitch_db > >      > /usr/local/bin/kolla_ensure_openvswitch_configured > br-mgmt vxoctavia > >      > > >      > Also you'll want to remove the IP address from that > interface as it's > >      > going to be a bridge. 
Think of it like your public > (br-ex) interface > >      > on your network nodes. > >      > > >      >  From there you'll need to update the bridge mappings > via kolla > >      > overrides. This would usually be in > /etc/kolla/config/neutron. Create > >      > a subdirectory for your compute inventory group and > create an > >      > ml2_conf.ini there. So you'd end up with something like: > >      > > >      > [root at kolla-deploy ~]# cat > >     /etc/kolla/config/neutron/compute/ml2_conf.ini > >      > [ml2_type_flat] > >      > flat_networks = mgmt-net > >      > > >      > [ovs] > >      > bridge_mappings = mgmt-net:br-mgmt > >      > > >      > run kolla-ansible --tags neutron reconfigure to push > out the new > >      > configs. Note that there is a bug where the neutron > containers > >     may not > >      > restart after the change, so you'll probably need to > do a 'docker > >      > container restart neutron_openvswitch_agent' on each > compute node. > >      > > >      > At this point, you'll need to create the provider > network in the > >     admin > >      > project like: > >      > > >      > openstack network create --provider-network-type flat > >      > --provider-physical-network mgmt-net lb-mgmt-net > >      > > >      > And then create a normal subnet attached to this > network with some > >      > largeish address scope. I wouldn't use 172.16.0.0/16 > > >      because docker > >      > uses that by default. I'm not sure if it matters since > the network > >      > traffic will be isolated on a bridge, but it makes me > paranoid so I > >      > avoided it. > >      > > >      > For your controllers, I think you can just let everything > >     function off > >      > your api interface since you're routing in your > spines. Set up a > >      > gateway somewhere from that lb-mgmt network and save > yourself the > >      > complication of adding an interface to your > controllers. If you > >     choose > >      > to use a separate interface on your controllers, > you'll need to make > >      > sure this patch is in your kolla-ansible install or > cherry pick it. > >      > > >      > > > > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 > >      > > >      > I don't think that's been backported at all, so unless > you're running > >      > off master you'll need to go get it. > >      > > >      >  From here on out, the regular Octavia instruction > should serve you. > >      > Create a flavor, Create a security group, and capture > their UUIDs > >      > along with the UUID of the provider network you made. > Override > >     them in > >      > globals.yml with: > >      > > >      > octavia_amp_boot_network_list: > >      > octavia_amp_secgroup_list: > >      > octavia_amp_flavor_id: > >      > > >      > This is all from my scattered notes and bad memory. > Hopefully it > >     makes > >      > sense. Corrections welcome. 
> >      > > >      > -Erik > >      > > >      > > >      > > >      >> > >      >> > >      >>> > >      >>> -Erik > >      >>>> > >      >>>> All the best, > >      >>>> Florian > >      >>>> _______________________________________________ > >      >>>> OpenStack-operators mailing list > >      >>>> OpenStack-operators at lists.openstack.org > > >      > > >      >>>> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > >     -- > > > >     EveryWare AG > >     Florian Engelmann > >     Systems Engineer > >     Zurlindenstrasse 52a > >     CH-8003 Zürich > > > >     tel: +41 44 466 60 00 > >     fax: +41 44 466 60 10 > >     mail: mailto:florian.engelmann at everyware.ch > > >      > > >     web: http://www.everyware.ch > > > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > > web: http://www.everyware.ch > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From florian.engelmann at everyware.ch Thu Oct 25 08:22:17 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Thu, 25 Oct 2018 10:22:17 +0200 Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR In-Reply-To: <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> Message-ID: <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to this VxLAN? How to do something like that? Am 10/25/18 um 10:03 AM schrieb Florian Engelmann: > Hmm - so right now I can't see any routed option because: > > The gateway connected to the VLAN provider networks (bond1 on the > network nodes) is not able to route any traffic to my control nodes in > the spine-leaf layer3 backend network. > > And right now there is no br-ex at all nor any "streched" L2 domain > connecting all compute nodes. > > > So the only solution I can think of right now is to create an overlay > VxLAN in the spine-leaf backend network, connect all compute and control > nodes to this overlay L2 network, create a OVS bridge connected to that > network on the compute nodes and allow the Amphorae to get an IPin this > network as well. > Not to forget about DHCP... so the network nodes will need this bridge > as well. > > Am 10/24/18 um 10:01 PM schrieb Erik McCormick: >> >> >> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian >> > > wrote: >> >>     On the network nodes we've got a dedicated interface to deploy VLANs >>     (like the provider network for internet access). What about creating >>     another VLAN on the network nodes, give that bridge a IP which is >>     part of the subnet of lb-mgmt-net and start the octavia worker, >>     healthmanager and controller on the network nodes binding to that IP? >> >> The problem with that is you can't out an IP in the vlan interface and >> also use it as an OVS bridge, so the Octavia processes would have >> nothing to bind to. 
>> >> >> >> ------------------------------------------------------------------------ >>     *From:* Erik McCormick >     > >>     *Sent:* Wednesday, October 24, 2018 6:18 PM >>     *To:* Engelmann Florian >>     *Cc:* openstack-operators >>     *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and >>     VxLAN without DVR >> >> >>     On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann >>     >     > wrote: >> >>         Am 10/24/18 um 2:08 PM schrieb Erik McCormick: >>          > >>          > >>          > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann >>          > >         >>         >         >> >>          > wrote: >>          > >>          >     Ohoh - thank you for your empathy :) >>          >     And those great details about how to setup this mgmt >> network. >>          >     I will try to do so this afternoon but solving that >>         routing "puzzle" >>          >     (virtual network to control nodes) I will need our >>         network guys to help >>          >     me out... >>          > >>          >     But I will need to tell all Amphorae a static route to >>         the gateway that >>          >     is routing to the control nodes? >>          > >>          > >>          > Just set the default gateway when you create the neutron >>         subnet. No need >>          > for excess static routes. The route on the other connection >>         won't >>          > interfere with it as it lives in a namespace. >> >> >>         My compute nodes have no br-ex and there is no L2 domain spread >>         over all >>         compute nodes. As far as I understood lb-mgmt-net is a provider >>         network >>         and has to be flat or VLAN and will need a "physical" gateway >>         (as there >>         is no virtual router). >>         So the question - is it possible to get octavia up and running >>         without a >>         br-ex (L2 domain spread over all compute nodes) on the compute >>         nodes? >> >> >>     Sorry, I only meant it was *like* br-ex on your network nodes. You >>     don't need that on your computes. >> >>     The router here would be whatever does routing in your physical >>     network. Setting the gateway in the neutron subnet simply adds that >>     to the DHCP information sent to the amphorae. >> >>     This does bring up another thingI forgot  though. You'll probably >>     want to add the management network / bridge to your network nodes or >>     wherever you run the DHCP agents. When you create the subnet, be >>     sure to leave some space in the address scope for the physical >>     devices with static IPs. >> >>     As for multiple L2 domains, I can't think of a way to go about that >>     for the lb-mgmt network. It's a single network with a single subnet. >>     Perhaps you could limit load balancers to an AZ in a single rack? >>     Seems not very HA friendly. >> >> >> >>          > >>          > >>          > >>          >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick: >>          >      > So in your other email you said asked if there was a >>         guide for >>          >      > deploying it with Kolla ansible... >>          >      > >>          >      > Oh boy. No there's not. I don't know if you've seen my >>         recent >>          >     mails on >>          >      > Octavia, but I am going through this deployment >>         process with >>          >      > kolla-ansible right now and it is lacking in a few >> areas. 
>>          >      > >>          >      > If you plan to use different CA certificates for >>         client and server in >>          >      > Octavia, you'll need to add that into the playbook. >>         Presently it only >>          >      > copies over ca_01.pem, cacert.key, and client.pem and >>         uses them for >>          >      > everything. I was completely unable to make it work >>         with only one CA >>          >      > as I got some SSL errors. It passes gate though, so I >>         aasume it must >>          >      > work? I dunno. >>          >      > >>          >      > Networking comments and a really messy kolla-ansible / >>         octavia >>          >     how-to below... >>          >      > >>          >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann >>          >      > >         >>          >     >         >> wrote: >>          >      >> >>          >      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick: >>          >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann >>          >      >>> >         >>          >     >         >> wrote: >>          >      >>>> >>          >      >>>> Hi, >>          >      >>>> >>          >      >>>> We did test Octavia with Pike (DVR deployment) and >>         everything was >>          >      >>>> working right our of the box. We changed our >>         underlay network to a >>          >      >>>> Layer3 spine-leaf network now and did not deploy >>         DVR as we >>          >     don't wanted >>          >      >>>> to have that much cables in a rack. >>          >      >>>> >>          >      >>>> Octavia is not working right now as the lb-mgmt-net >>         does not >>          >     exist on >>          >      >>>> the compute nodes nor does a br-ex. >>          >      >>>> >>          >      >>>> The control nodes running >>          >      >>>> >>          >      >>>> octavia_worker >>          >      >>>> octavia_housekeeping >>          >      >>>> octavia_health_manager >>          >      >>>> octavia_api >>          >      >>>> >>          >     Amphorae-VMs, z.b. >>          > >>          >     lb-mgmt-net 172.16.0.0/16 >>         default GW >>          >      >>>> and as far as I understood octavia_worker, >>          >     octavia_housekeeping and >>          >      >>>> octavia_health_manager have to talk to the amphora >>         instances. >>          >     But the >>          >      >>>> control nodes are spread over three different >>         leafs. So each >>          >     control >>          >      >>>> node in a different L2 domain. >>          >      >>>> >>          >      >>>> So the question is how to deploy a lb-mgmt-net >>         network in our >>          >     setup? >>          >      >>>> >>          >      >>>> - Compute nodes have no "stretched" L2 domain >>          >      >>>> - Control nodes, compute nodes and network nodes >>         are in L3 >>          >     networks like >>          >      >>>> api, storage, ... >>          >      >>>> - Only network nodes are connected to a L2 domain >>         (with a >>          >     separated NIC) >>          >      >>>> providing the "public" network >>          >      >>>> >>          >      >>> You'll need to add a new bridge to your compute >>         nodes and create a >>          >      >>> provider network associated with that bridge. In my >>         setup this is >>          >      >>> simply a flat network tied to a tagged interface. 
In >>         your case it >>          >      >>> probably makes more sense to make a new VNI and >>         create a vxlan >>          >      >>> provider network. The routing in your switches >>         should handle >>          >     the rest. >>          >      >> >>          >      >> Ok that's what I try right now. But I don't get how >>         to setup >>          >     something >>          >      >> like a VxLAN provider Network. I thought only vlan >>         and flat is >>          >     supported >>          >      >> as provider network? I guess it is not possible to >>         use the tunnel >>          >      >> interface that is used for tenant networks? >>          >      >> So I have to create a separated VxLAN on the control >>         and compute >>          >     nodes like: >>          >      >> >>          >      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 >>         group >>          >     239.1.1.1 >>          >      >> dev vlan3535 ttl 5 >>          >      >> # ip addr add 172.16.1.11/20 >>         dev vxoctavia >>          >      >> # ip link set vxoctavia up >>          >      >> >>          >      >> and use it like a flat provider network, true? >>          >      >> >>          >      > This is a fine way of doing things, but it's only half >>         the battle. >>          >      > You'll need to add a bridge on the compute nodes and >>         bind it to that >>          >      > new interface. Something like this if you're using >>         openvswitch: >>          >      > >>          >      > docker exec openvswitch_db >>          >      > /usr/local/bin/kolla_ensure_openvswitch_configured >>         br-mgmt vxoctavia >>          >      > >>          >      > Also you'll want to remove the IP address from that >>         interface as it's >>          >      > going to be a bridge. Think of it like your public >>         (br-ex) interface >>          >      > on your network nodes. >>          >      > >>          >      >  From there you'll need to update the bridge mappings >>         via kolla >>          >      > overrides. This would usually be in >>         /etc/kolla/config/neutron. Create >>          >      > a subdirectory for your compute inventory group and >>         create an >>          >      > ml2_conf.ini there. So you'd end up with something >> like: >>          >      > >>          >      > [root at kolla-deploy ~]# cat >>          >     /etc/kolla/config/neutron/compute/ml2_conf.ini >>          >      > [ml2_type_flat] >>          >      > flat_networks = mgmt-net >>          >      > >>          >      > [ovs] >>          >      > bridge_mappings = mgmt-net:br-mgmt >>          >      > >>          >      > run kolla-ansible --tags neutron reconfigure to push >>         out the new >>          >      > configs. Note that there is a bug where the neutron >>         containers >>          >     may not >>          >      > restart after the change, so you'll probably need to >>         do a 'docker >>          >      > container restart neutron_openvswitch_agent' on each >>         compute node. 
>>          >      > >>          >      > At this point, you'll need to create the provider >>         network in the >>          >     admin >>          >      > project like: >>          >      > >>          >      > openstack network create --provider-network-type flat >>          >      > --provider-physical-network mgmt-net lb-mgmt-net >>          >      > >>          >      > And then create a normal subnet attached to this >>         network with some >>          >      > largeish address scope. I wouldn't use 172.16.0.0/16 >>         >>          >      because docker >>          >      > uses that by default. I'm not sure if it matters since >>         the network >>          >      > traffic will be isolated on a bridge, but it makes me >>         paranoid so I >>          >      > avoided it. >>          >      > >>          >      > For your controllers, I think you can just let >> everything >>          >     function off >>          >      > your api interface since you're routing in your >>         spines. Set up a >>          >      > gateway somewhere from that lb-mgmt network and save >>         yourself the >>          >      > complication of adding an interface to your >>         controllers. If you >>          >     choose >>          >      > to use a separate interface on your controllers, >>         you'll need to make >>          >      > sure this patch is in your kolla-ansible install or >>         cherry pick it. >>          >      > >>          >      > >>          > >> >> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 >> >>          >      > >>          >      > I don't think that's been backported at all, so unless >>         you're running >>          >      > off master you'll need to go get it. >>          >      > >>          >      >  From here on out, the regular Octavia instruction >>         should serve you. >>          >      > Create a flavor, Create a security group, and capture >>         their UUIDs >>          >      > along with the UUID of the provider network you made. >>         Override >>          >     them in >>          >      > globals.yml with: >>          >      > >>          >      > octavia_amp_boot_network_list: >>          >      > octavia_amp_secgroup_list: >>          >      > octavia_amp_flavor_id: >>          >      > >>          >      > This is all from my scattered notes and bad memory. >>         Hopefully it >>          >     makes >>          >      > sense. Corrections welcome. 
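(Condensed, the tail end of that quoted recipe looks like the sketch below. This is illustrative only: it assumes the mgmt-net/br-mgmt names from the how-to above, an address range picked to stay clear of docker's default, and UUID placeholders you would fill in from your own cloud:)

# provider network + subnet for the amphora management traffic
openstack network create --provider-network-type flat \
  --provider-physical-network mgmt-net lb-mgmt-net
openstack subnet create --network lb-mgmt-net \
  --subnet-range 172.31.0.0/16 --gateway 172.31.0.1 \
  --allocation-pool start=172.31.1.10,end=172.31.255.250 lb-mgmt-subnet

# /etc/kolla/globals.yml overrides, then push them out:
#   octavia_amp_boot_network_list: <lb-mgmt-net UUID>
#   octavia_amp_secgroup_list: <security group UUID>
#   octavia_amp_flavor_id: <flavor UUID>
kolla-ansible -i inventory reconfigure -t octavia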
>>          >      > -Erik
>
> [quoted signatures and mailing list footers trimmed]

-- 
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From tobias.rydberg at citynetwork.eu Thu Oct 25 09:20:04 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Thu, 25 Oct 2018 11:20:04 +0200
Subject: [Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG
Message-ID: <1e16eefd-142b-9a3b-4812-baf463e1fa03@citynetwork.eu>

Hi everyone,

Time for a new meeting for PCWG - today 1400 UTC in #openstack-publiccloud!
Agenda found at https://etherpad.openstack.org/p/publiccloud-wg

Cheers,
Tobias

-- 
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From tobias.urdin at binero.se Thu Oct 25 14:00:06 2018
From: tobias.urdin at binero.se (Tobias Urdin)
Date: Thu, 25 Oct 2018 16:00:06 +0200
Subject: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface
In-Reply-To: <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se>
References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se>
Message-ID: <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se>

Might as well throw it out here.

After a lot of troubleshooting we were able to narrow our issue down to our test environment running qemu virtualization; we moved our compute node to hardware and used kvm full virtualization instead.

We could reliably reproduce the issue: generating a CSR from a private key and then trying to verify that CSR would fail, complaining about "Signature did not match the certificate request".
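(For anyone who wants to test their own hypervisor for this, the failing sequence boils down to plain openssl; the file names and subject below are illustrative:)

openssl genrsa -out test.key 2048
openssl req -new -key test.key -subj "/CN=csr-test" -out test.csr
openssl req -in test.csr -noout -verify    # prints "verify OK" on a healthy host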
We suspect qemu floating point emulation caused this; the same OpenSSL function that validates a CSR is also used when validating the SSL handshake, which is what failed in our case.

After going through the whole stack, we have Octavia working flawlessly without any issues at all.

Best regards
Tobias

On 10/23/2018 04:31 PM, Tobias Urdin wrote:
> Hello Erik,
>
> Could you specify the DNs you used for all certificates just so that I
> can rule it out on my side.
> You can redact anything sensitive with some to just get the feel on how
> it's configured.
>
> Best regards
> Tobias
>
> On 10/22/2018 04:47 PM, Erik McCormick wrote:
>> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote:
>>> Hello,
>>>
>>> I've been having a lot of issues with SSL certificates myself, on my
>>> second trip now trying to get it working.
>>>
>>> Before I spent a lot of time walking through every line in the DevStack
>>> plugin and fixing my config options, used the generate
>>> script [1] and still it didn't work.
>>>
>>> When I got the "invalid padding" issue it was because of the DN I used
>>> for the CA and the certificate IIRC.
>>>
>>> > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
>>> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
>>> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
>>> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
>>> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
>>> > 19:47 < tobias-urdin> after a quick google "The problem was that my
>>> CA DN was the same as the certificate DN."
>>>
>>> IIRC I think that solved it, but then again I wouldn't remember fully
>>> since I've been at so many different angles by now.
>>>
>>> Here is my IRC logs history from the #openstack-lbaas channel, perhaps
>>> it can help you out
>>> http://paste.openstack.org/show/732575/
>>>
>> Tobias, I owe you a beer. This was precisely the issue. I'm deploying
>> Octavia with kolla-ansible. It only deploys a single CA. After hacking
>> the templates and playbook to incorporate a separate server CA, the
>> amphorae now load and provision the required namespace. I'm adding a
>> kolla tag to the subject of this in hopes that someone might want to
>> take on changing this behavior in the project. Hopefully after I get
>> through Upstream Institute in Berlin I'll be able to do it myself if
>> nobody else wants to do it.
>>
>> For certificate generation, I extracted the contents of
>> octavia_certs_install.yml (which sets up the directory structure,
>> openssl.cnf, and the client CA), and octavia_certs.yml (which creates
>> the server CA and the client certificate) and mashed them into a
>> separate playbook just for this purpose. At the end I get:
>>
>> ca_01.pem - Client CA Certificate
>> ca_01.key - Client CA Key
>> ca_server_01.pem - Server CA Certificate
>> cakey.pem - Server CA Key
>> client.pem - Concatenated Client Key and Certificate
>>
>> If it would help to have the playbook, I can stick it up on github
>> with a huge "This is a hack" disclaimer on it.
>>
>>> -----
>>>
>>> Sorry for hijacking the thread but I'm stuck as well.
>>> >>> I've in the past tried to generate the certificates with [1] but now >>> moved on to using the openstack-ansible way of generating them [2] >>> with some modifications. >>> >>> Right now I'm just getting: Could not connect to instance. Retrying.: >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) >>> from the amphoras, haven't got any further but I've eliminated a lot of >>> stuck in the middle. >>> >>> Tried deploying Ocatavia on Ubuntu with python3 to just make sure there >>> wasn't an issue with CentOS and OpenSSL versions since it tends to lag >>> behind. >>> Checking the amphora with openssl s_client [3] it gives the same one, >>> but the verification is successful just that I don't understand what the >>> bad signature >>> part is about, from browsing some OpenSSL code it seems to be related to >>> RSA signatures somehow. >>> >>> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad >>> signature:s3_clnt.c:2032: >>> >>> So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS >>> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm >>> back to something related >>> to the certificates or the communication between the endpoints, or what >>> actually responds inside the amphora (gunicorn IIUC?). Based on the >>> "verify" functions actually causing that bad signature error I would >>> assume it's the generated certificate that the amphora presents that is >>> causing it. >>> >>> I'll have to continue the troubleshooting to the inside of the amphora, >>> I've used the test-only amphora image before but have now built my own >>> one that is >>> using the amphora-agent from the actual stable branch, but same issue >>> (bad signature). >>> >>> For verbosity this is the config options set for the certificates in >>> octavia.conf and which file it was copied from [4], same here, a >>> replication of what openstack-ansible does. >>> >>> Appreciate any feedback or help :) >>> >>> Best regards >>> Tobias >>> >>> [1] >>> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh >>> [2] http://paste.openstack.org/show/732483/ >>> [3] http://paste.openstack.org/show/732486/ >>> [4] http://paste.openstack.org/show/732487/ >>> >>> On 10/20/2018 01:53 AM, Michael Johnson wrote: >>>> Hi Erik, >>>> >>>> Sorry to hear you are still having certificate issues. >>>> >>>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant >>>> network for the VIP, one of the first steps after the worker connects >>>> to the amphora agent is finishing the required configuration of the >>>> VIP interface inside the network namespace on the amphroa. >>>> >> Thanks for the hint on the workflow of this. I hadn't gotten deep >> enough into the code to find that yet, but I suspected it was blocking >> since the namespace never got created either. Thanks >> >>>> If I remember correctly, you are attempting to configure Octavia with >>>> the dual CA option (which is good for non-development use). 
>>>> This is what I have for notes:
>>>>
>>>> [certificates] gets the following:
>>>> cert_generator = local_cert_generator
>>>> ca_certificate = server CA's "server.pem" file
>>>> ca_private_key = server CA's "server.key" file
>>>> ca_private_key_passphrase = pass phrase for ca_private_key
>>>> [controller_worker]
>>>> client_ca = Client CA's ca_cert file
>>>> [haproxy_amphora]
>>>> client_cert = Client CA's client.pem file (I think with its key
>>>> concatenated is what rm_work said the other day)
>>>> server_ca = Server CA's ca_cert file
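(Pulled together into a single octavia.conf fragment, Michael's notes look roughly like the sketch below. The option names come straight from his notes; the file paths and passphrase are illustrative placeholders, not values confirmed anywhere in this thread:)

[certificates]
cert_generator = local_cert_generator
# server CA: signs the certificates the amphorae present
ca_certificate = /etc/octavia/certs/server_ca.cert.pem
ca_private_key = /etc/octavia/certs/server_ca.key.pem
ca_private_key_passphrase = example-passphrase

[controller_worker]
# client CA: used by the amphorae to validate the controller's client certificate
client_ca = /etc/octavia/certs/client_ca.cert.pem

[haproxy_amphora]
# client key and certificate concatenated into one PEM
client_cert = /etc/octavia/certs/client.pem
server_ca = /etc/octavia/certs/server_ca.cert.pem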
>>
>> This is all very helpful. It's a bit difficult to know what goes where
>> the way the documentation is written presently. For something that's
>> going to be the defacto standard for loadbalancing, we as a community
>> need to do a better job of documenting how to set up, configure, and
>> manage this in production. I'm trying to capture my lessons learned
>> and processes as I go to help with that if I can.
>>
>> -Erik
>>
>>>> That said, I can probably run through this and write something up next
>>>> week that is more step-by-step/detailed.
>>>>
>>>> Michael
>>>>
>>>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick
>>>> wrote:
>>>>> Apologies for cross-posting, but in the event that these might be
>>>>> worth filing as bugs, I wanted the Octavia devs to see it as well...
>>>>>
>>>>> I've been wrestling with getting Octavia up and running and have
>>>>> become stuck on two issues. I'm hoping someone has run into these
>>>>> before. My google foo has come up empty.
>>>>>
>>>>> Issue 1:
>>>>> When the Octavia controller tries to poll the amphora instance, it
>>>>> tries repeatedly and eventually fails. The error on the controller
>>>>> side is:
>>>>>
>>>>> 2018-10-19 14:17:39.181 26 ERROR
>>>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
>>>>> retries (currently set to 300) exhausted. The amphora is unavailable.
>>>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
>>>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
>>>>> SSLError(SSLError("bad handshake: Error([('rsa routines',
>>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
>>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
>>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
>>>>> 'tls_process_server_certificate', 'certificate verify
>>>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
>>>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
>>>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
>>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
>>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
>>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
>>>>> 'tls_process_server_certificate', 'certificate verify
>>>>> failed')],)",),))
>>>>>
>>>>> On the amphora side I see:
>>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
>>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
>>>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
>>>>> failure (_ssl.c:1754)
>>>>>
>>>>> I've generated certificates both with the script in the Octavia git
>>>>> repo, and with the Openstack Ansible playbook. I can see that they are
>>>>> present in /etc/octavia/certs.
>>>>>
>>>>> I'm using the Kolla (Queens) containers for the control plane so I'm
>>>>> sure I've satisfied all the python library constraints.
>>>>>
>>>>> Issue 2:
>>>>> I'm not sure how it gets configured, but the tenant network interface
>>>>> (ens6) never comes up. I can spawn other instances on that network
>>>>> with no issue, and I can see that Neutron has the port attached to the
>>>>> instance. However, in the instance this is all I get:
>>>>>
>>>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>>>>> group default qlen 1
>>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>> inet 127.0.0.1/8 scope host lo
>>>>> valid_lft forever preferred_lft forever
>>>>> inet6 ::1/128 scope host
>>>>> valid_lft forever preferred_lft forever
>>>>> 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast
>>>>> state UP group default qlen 1000
>>>>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
>>>>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
>>>>> valid_lft forever preferred_lft forever
>>>>> inet6 fe80::f816:3eff:fe30:c460/64 scope link
>>>>> valid_lft forever preferred_lft forever
>>>>> 3: ens6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
>>>>> default qlen 1000
>>>>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
>>>>>
>>>>> There's no evidence of the interface anywhere else including udev rules.
>>>>>
>>>>> Any help with either or both issues would be greatly appreciated.
>>>>>
>>>>> Cheers,
>>>>> Erik
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[repeated quoted list footers trimmed]

From florian.engelmann at everyware.ch Thu Oct 25 14:39:05 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Thu, 25 Oct 2018 16:39:05 +0200
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch>
Message-ID: <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch>

It looks like devstack implemented some o-hm0 interface to connect the physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor is OVS.

Is it an option to deploy those Octavia services needing this connection to the compute or network nodes and use o-hm0?

Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
> this VxLAN? How to do something like that?
>
> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>> Hmm - so right now I can't see any routed option because:
>>
>> The gateway connected to the VLAN provider networks (bond1 on the
>> network nodes) is not able to route any traffic to my control nodes in
>> the spine-leaf layer3 backend network.
>>
>> And right now there is no br-ex at all nor any "stretched" L2 domain
>> connecting all compute nodes.
>>
>> So the only solution I can think of right now is to create an overlay
>> VxLAN in the spine-leaf backend network, connect all compute and
>> control nodes to this overlay L2 network, create an OVS bridge
>> connected to that network on the compute nodes and allow the Amphorae
>> to get an IP in this network as well.
>> Not to forget about DHCP... so the network nodes will need this bridge
>> as well.
>>
>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>> [snip -- quoted thread trimmed; identical to the exchange and how-to quoted earlier in this digest]
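(If you go down that overlay road, joining a control node would look much like Erik's earlier vxoctavia example, except the control node keeps an IP on the interface so the Octavia services have a local address to bind to -- illustrative values, untested here:)

# on each control node: join the same VNI and keep an IP on the interface
ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1 dev vlan3535 ttl 5
ip addr add 172.16.1.2/20 dev vxoctavia
ip link set vxoctavia up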
-- 
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From Kevin.Fox at pnnl.gov Thu Oct 25 15:31:15 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 25 Oct 2018 15:31:15 +0000
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch>, <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch>
Message-ID: <1A3C52DFCD06494D8528644858247BF01C20F4A7@EX10MBOX03.pnnl.gov>

Would it make sense to move the control plane for this piece into the cluster? (VM in a management tenant?)

Thanks,
Kevin
________________________________________
From: Florian Engelmann [florian.engelmann at everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

[snip -- Florian's 14:39 message and the thread below it quoted in full; trimmed, see above]
-- 
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch

From florian.engelmann at everyware.ch Thu Oct 25 15:34:47 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Thu, 25 Oct 2018 17:34:47 +0200
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch> <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch>
Message-ID: <76d2f8f0-04df-131b-5817-88da284778cf@everyware.ch>

I managed to configure o-hm0 on the compute nodes and I am able to communicate with the amphorae:

# create the Octavia management net
openstack network create lb-mgmt-net -f value -c id

# and the subnet
openstack subnet create --subnet-range 172.31.0.0/16 \
  --allocation-pool start=172.31.17.10,end=172.31.255.250 \
  --network lb-mgmt-net lb-mgmt-subnet

# get the subnet ID
openstack subnet show lb-mgmt-subnet -f value -c id

# create a port in this subnet for the compute node (ewos1-com1a-poc2)
openstack port create --security-group octavia --device-owner Octavia:health-mgr \
  --host=ewos1-com1a-poc2 -c id -f value --network lb-mgmt-net \
  --fixed-ip subnet=b4c70178-949b-4d60-8d9f-09d13f720b6a,ip-address=172.31.0.101 \
  octavia-health-manager-ewos1-com1a-poc2-listen-port
openstack port show 6fb13c3f-469e-4a81-a504-a161c6848654
openstack network show lb-mgmt-net -f value -c id

# edit octavia_amp_boot_network_list: 3633be41-926f-4a2c-8803-36965f76ea8d
vi /etc/kolla/globals.yml

# reconfigure octavia
kolla-ansible -i inventory reconfigure -t octavia

# create o-hm0 on the compute node
docker exec ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \
  set Interface o-hm0 type=internal -- \
  set Interface o-hm0 external-ids:iface-status=active -- \
  set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \
  set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \
  set Interface o-hm0 external-ids:skip_cleanup=true

# fix MAC of o-hm0
ip link set dev o-hm0 address fa:16:3e:51:e9:c3

# get IP from neutron DHCP agent (should get IP: 172.31.0.101 in this example)
ip link set dev o-hm0 up
dhclient -v o-hm0

# create a load balancer and test connectivity, e.g. the amphora IP is 172.31.17.15
root@ewos1-com1a-poc2:~# ping 172.31.17.15

But octavia_worker, octavia_housekeeping and octavia_health_manager are running on our control nodes, and those are not running any OVS networks. The next test is to deploy those three services to my network nodes and configure o-hm0 there as well. I will also have to change

bind_port = 5555
bind_ip = 10.33.16.11
controller_ip_port_list = 10.33.16.11:5555

to bind to all IPs or to the IP of o-hm0.
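(For reference, the binding he is talking about lives in octavia.conf; a minimal sketch, assuming the health manager should listen on the o-hm0 address from the example above:)

[health_manager]
bind_ip = 172.31.0.101
bind_port = 5555
controller_ip_port_list = 172.31.0.101:5555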
On 10/25/18 4:39 PM, Florian Engelmann wrote:
> It looks like devstack implemented some o-hm0 interface to connect the
> physical control host to a VxLAN.
> In our case there is no VxLAN at the control nodes nor is OVS.
>
> Is it an option to deploy those Octavia services needing this connection
> to the compute or network nodes and use o-hm0?
>
> [remainder of quoted thread snipped]
--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From florian.engelmann at everyware.ch  Thu Oct 25 15:37:21 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Thu, 25 Oct 2018 17:37:21 +0200
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C20F4A7@EX10MBOX03.pnnl.gov>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch> <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C20F4A7@EX10MBOX03.pnnl.gov>
Message-ID: <3540494a-c746-69a8-f5b7-2a717b8bbe0c@everyware.ch>

You mean deploy octavia into an openstack project? But I will then need
to connect the octavia services with my galera DBs... so same problem.
On 10/25/18 5:31 PM, Fox, Kevin M wrote:
> Would it make sense to move the control plane for this piece into the
> cluster? (vm in a management tenant?)
>
> Thanks,
> Kevin
> ________________________________________
> From: Florian Engelmann [florian.engelmann at everyware.ch]
> Sent: Thursday, October 25, 2018 7:39 AM
> To: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
>
> [remainder of quoted thread snipped]
--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From Kevin.Fox at pnnl.gov  Thu Oct 25 16:20:00 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 25 Oct 2018 16:20:00 +0000
Subject: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
In-Reply-To: <3540494a-c746-69a8-f5b7-2a717b8bbe0c@everyware.ch>
References: <79722d60-6891-269c-90f7-19a1e835bb60@everyware.ch> <60b1f464-63bc-01d8-4224-1d072b54bbd5@everyware.ch> <1540409584381.85393@everyware.ch> <016437c7-1319-96ed-614c-5f45c5672748@everyware.ch> <3fbb7ddb-91d6-b238-3ac4-8259e938ae53@everyware.ch> <860d4115-e993-070f-fe3e-b81f21283859@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C20F4A7@EX10MBOX03.pnnl.gov>, <3540494a-c746-69a8-f5b7-2a717b8bbe0c@everyware.ch>
Message-ID: <1A3C52DFCD06494D8528644858247BF01C20F57B@EX10MBOX03.pnnl.gov>

Can you use a provider network to expose galera to the vm? Alternately,
you could put a db out in the vm side. You don't strictly need to use
the same db for every component. If crossing the streams is hard, maybe
avoiding crossing at all is easier?

Thanks,
Kevin
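For the provider network option, the rough shape would be something like
the following (names, VLAN ID and ranges are made up for the sketch; it
only works if the physical network really routes between that segment
and the galera/rabbit VIPs):

openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 3535 \
  ctrl-access-net
openstack subnet create --subnet-range 10.33.18.0/24 \
  --network ctrl-access-net ctrl-access-subnet

The octavia services inside the management-tenant VM would then point at
the control plane as usual, e.g. in octavia.conf:

[database]
connection = mysql+pymysql://octavia:<password>@<galera-vip>:3306/octavia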
________________________________________
From: Florian Engelmann [florian.engelmann at everyware.ch]
Sent: Thursday, October 25, 2018 8:37 AM
To: Fox, Kevin M; openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

You mean deploy octavia into an openstack project? But I will then need
to connect the octavia services with my galera DBs... so same problem.

On 10/25/18 5:31 PM, Fox, Kevin M wrote:
> Would it make sense to move the control plane for this piece into the
> cluster? (vm in a management tenant?)
>
> Thanks,
> Kevin
>
> [remainder of quoted thread snipped]
--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelmann at everyware.ch
web: http://www.everyware.ch

From edmondsw at us.ibm.com  Thu Oct 25 17:29:09 2018
From: edmondsw at us.ibm.com (William M Edmonds)
Date: Thu, 25 Oct 2018 13:29:09 -0400
Subject: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?
In-Reply-To: 
References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com>
Message-ID: 

melanie witt wrote on 10/25/2018 02:14:40 AM:
> On Thu, 25 Oct 2018 14:12:51 +0900, ボーアディネシュ[bhor Dinesh] wrote:
> > We were having a similar use case like *Preemptible Instances* called
> > as *Rich-VM's* which are high in resources and are deployed each per
> > hypervisor. We have a custom code in production which tracks the quota
> > for such instances separately, and for the same reason we have a
> > *rich_instances* custom quota class, same as the *instances* quota class.
>
> Please see the last reply I recently sent on this thread. I have been
> thinking the same as you about how we could use quota classes to
> implement the quota piece of preemptible instances. I think we can
> achieve the same thing using unified limits, specifically registered
> limits [1], which span across all projects. So, I think we are covered
> moving forward with migrating to unified limits and deprecation of
> quota classes. Let me know if you spot any issues with this idea.

And we could finally close https://bugs.launchpad.net/nova/+bug/1602396
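For reference, the registered limit melanie mentions is the
keystone-side default that applies to every project, which is what makes
it a plausible stand-in for quota classes. A sketch with recent
python-openstackclient (option names may vary slightly by release):

openstack registered limit create --service nova \
  --default-limit 10 servers

openstack limit create --service nova --project demo \
  --resource-limit 20 servers

The first sets the cross-project default, the second a per-project
override.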
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.d.moore at nasa.gov  Thu Oct 25 22:43:30 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Thu, 25 Oct 2018 22:43:30 +0000
Subject: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
In-Reply-To: 
References: 
Message-ID: <93B1124A-C041-4601-8FBD-7F3DA7923AE8@nasa.gov>

I have dug deep into the code for glance, shoving debug outputs in to
see what I can find in our queens environment. Here is my debug code (I
have a lot more, but this is the salient part):

    # log the action plus exactly what oslo.policy will see as creds
    LOG.debug("in enforce(), action='%s', policyvalues='%s'",
              action, context.to_policy_values())
    return super(Enforcer, self).enforce(action, target,
                                         context.to_policy_values(),
                                         do_raise=True,
                                         exc=exception.Forbidden,
                                         action=action)

Below is the output when attempting to set an image that I own, while
being an admin, to public via `openstack image set --public cirros`:

2018-10-25 18:29:16.575 17561 DEBUG glance.api.policy
[req-e343bb10-8ec8-40df-8c0c-47d1b217ca0d - - - - -] in enforce(),
action='publicize_image', policyvalues='{'service_roles': [], 'user_id':
None, 'roles': [], 'user_domain_id': None, 'service_project_id': None,
'service_user_id': None, 'service_user_domain_id': None,
'service_project_domain_id': None, 'is_admin_project': True, 'user':
None, 'project_id': None, 'tenant': None, 'project_domain_id': None}'
enforce /usr/lib/python2.7/site-packages/glance/api/policy.py:64

And here is what shows up when I run `openstack image list` as our test
user (`jonathan`) that is NOT an admin:

2018-10-25 18:32:24.841 17564 DEBUG glance.api.policy
[req-22abdcf2-14cd-4680-8deb-e48902a7ddef - - - - -] in enforce(),
action='get_images', policyvalues='{'service_roles': [], 'user_id':
None, 'roles': [], 'user_domain_id': None, 'service_project_id': None,
'service_user_id': None, 'service_user_domain_id': None,
'service_project_domain_id': None, 'is_admin_project': True, 'user':
None, 'project_id': None, 'tenant': None, 'project_domain_id': None}'
enforce /usr/lib/python2.7/site-packages/glance/api/policy.py:64

The takeaway I have is that in the case of get_images, is_admin_project
is True, which is WRONG for that test, but since it's a read-only
operation it's content to short-circuit and return all those images. In
the case of publicize_image, is_admin_project being True isn't enough,
and when it checks the user (which is None) it says NOPE.

So somehow, for some reason, the glance API's request context is super
duper wrong: every identity field is None and the role list is empty.
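That short-circuit is easy to demonstrate with oslo.policy directly,
outside of glance. A minimal sketch (the two rules mirror the glance
defaults as I recall them, where get_images is "" and publicize_image is
"role:admin"; the creds dict mimics the empty context in the logs above):

from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.set_rules(policy.Rules.from_dict({
    'get_images': '',                 # empty rule: always passes
    'publicize_image': 'role:admin',  # needs the admin role in creds
}))

# an "empty" context like the one in the debug output
creds = {'user_id': None, 'project_id': None, 'roles': [],
         'is_admin_project': True}

print(enforcer.enforce('get_images', {}, creds))       # True
print(enforcer.enforce('publicize_image', {}, creds))  # False

So an unauthenticated/empty context can still pass get_images but never
publicize_image; the bug is whatever is emptying the request context,
not the policy engine itself.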
Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnsomor at gmail.com  Fri Oct 26 00:34:02 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Thu, 25 Oct 2018 17:34:02 -0700
Subject: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface
In-Reply-To: <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se>
References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se>
Message-ID: 

FYI, I took some time out this afternoon and wrote a detailed
certificate configuration guide. Hopefully this will help.

https://review.openstack.org/613454

Reviews would be welcome!

Michael

On Thu, Oct 25, 2018 at 7:00 AM Tobias Urdin wrote:
>
> Might as well throw it out here.
>
> After a lot of troubleshooting we were able to narrow our issue down
> to our test environment running qemu virtualization; we moved our
> compute node to hardware and used kvm full virtualization instead.
>
> We could properly reproduce the issue where generating a CSR from a
> private key and then trying to verify the CSR would fail, complaining
> about "Signature did not match the certificate request".
>
> We suspect qemu floating point emulation caused this: the same OpenSSL
> function that validates a CSR is the one used when validating the SSL
> handshake, which is what caused our issue.
>
> After going through the whole stack, we have Octavia working
> flawlessly without any issues at all.
>
> Best regards
> Tobias
>
> [remainder of quoted thread snipped]
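The CSR check Tobias describes can be reproduced with nothing but the
openssl CLI, which makes it a handy smoke test for a suspect hypervisor
(file names here are made up):

openssl genrsa -out test.key 2048
openssl req -new -key test.key -out test.csr -subj "/CN=octavia-test"
openssl req -in test.csr -noout -verify

On a healthy host the last command prints "verify OK"; per Tobias's
report, on the qemu-emulated guest the self-signature check fails
instead.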
At the end I get: > >> > >> ca_01.pem - Client CA Certificate > >> ca_01.key - Client CA Key > >> ca_server_01.pem - Server CA Certificate > >> cakey.pem - Server CA Key > >> client.pem - Concatenated Client Key and Certificate > >> > >> If it would help to have the playbook, I can stick it up on github > >> with a huge "This is a hack" disclaimer on it. > >> > >>> ----- > >>> > >>> Sorry for hijacking the thread but I'm stuck as well. > >>> > >>> I've in the past tried to generate the certificates with [1] but now > >>> moved on to using the openstack-ansible way of generating them [2] > >>> with some modifications. > >>> > >>> Right now I'm just getting: Could not connect to instance. Retrying.: > >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > >>> from the amphoras, haven't got any further but I've eliminated a lot of > >>> stuck in the middle. > >>> > >>> Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > >>> wasn't an issue with CentOS and OpenSSL versions since it tends to lag > >>> behind. > >>> Checking the amphora with openssl s_client [3] it gives the same one, > >>> but the verification is successful just that I don't understand what the > >>> bad signature > >>> part is about, from browsing some OpenSSL code it seems to be related to > >>> RSA signatures somehow. > >>> > >>> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > >>> signature:s3_clnt.c:2032: > >>> > >>> So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > >>> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > >>> back to something related > >>> to the certificates or the communication between the endpoints, or what > >>> actually responds inside the amphora (gunicorn IIUC?). Based on the > >>> "verify" functions actually causing that bad signature error I would > >>> assume it's the generated certificate that the amphora presents that is > >>> causing it. > >>> > >>> I'll have to continue the troubleshooting to the inside of the amphora, > >>> I've used the test-only amphora image before but have now built my own > >>> one that is > >>> using the amphora-agent from the actual stable branch, but same issue > >>> (bad signature). > >>> > >>> For verbosity this is the config options set for the certificates in > >>> octavia.conf and which file it was copied from [4], same here, a > >>> replication of what openstack-ansible does. > >>> > >>> Appreciate any feedback or help :) > >>> > >>> Best regards > >>> Tobias > >>> > >>> [1] > >>> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > >>> [2] http://paste.openstack.org/show/732483/ > >>> [3] http://paste.openstack.org/show/732486/ > >>> [4] http://paste.openstack.org/show/732487/ > >>> > >>> On 10/20/2018 01:53 AM, Michael Johnson wrote: > >>>> Hi Erik, > >>>> > >>>> Sorry to hear you are still having certificate issues. > >>>> > >>>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > >>>> network for the VIP, one of the first steps after the worker connects > >>>> to the amphora agent is finishing the required configuration of the > >>>> VIP interface inside the network namespace on the amphroa. > >>>> > >> Thanks for the hint on the workflow of this. I hadn't gotten deep > >> enough into the code to find that yet, but I suspected it was blocking > >> since the namespace never got created either. 
> >>> -----
> >>>
> >>> Sorry for hijacking the thread but I'm stuck as well.
> >>>
> >>> I've tried in the past to generate the certificates with [1] but have
> >>> now moved on to using the openstack-ansible way of generating them [2],
> >>> with some modifications.
> >>>
> >>> Right now I'm just getting: Could not connect to instance. Retrying.:
> >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
> >>> from the amphorae. I haven't got any further, but I've eliminated a
> >>> lot of possible causes along the way.
> >>>
> >>> I tried deploying Octavia on Ubuntu with python3 just to make sure
> >>> there wasn't an issue with CentOS and OpenSSL versions, since CentOS
> >>> tends to lag behind.
> >>> Checking the amphora with openssl s_client [3] gives the same error,
> >>> and although the chain verification itself succeeds, I don't
> >>> understand what the bad signature part is about; from browsing some
> >>> OpenSSL code it seems to be related to RSA signatures somehow.
> >>>
> >>> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
> >>> signature:s3_clnt.c:2032:
> >>>
> >>> So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS
> >>> (openssl-1.0.2k) being the problem, and ruled out signing_digest, so
> >>> I'm back to something related to the certificates or the communication
> >>> between the endpoints, or whatever actually responds inside the
> >>> amphora (gunicorn IIUC?). Based on the "verify" functions actually
> >>> causing that bad signature error, I would assume it's the generated
> >>> certificate that the amphora presents that is causing it.
> >>>
> >>> I'll have to continue the troubleshooting inside the amphora. I've
> >>> used the test-only amphora image before but have now built my own one
> >>> that is using the amphora-agent from the actual stable branch, but I
> >>> get the same issue (bad signature).
> >>>
> >>> For verbosity, these are the config options set for the certificates
> >>> in octavia.conf and which file each was copied from [4]; again, a
> >>> replication of what openstack-ansible does.
> >>>
> >>> Appreciate any feedback or help :)
> >>>
> >>> Best regards
> >>> Tobias
> >>>
> >>> [1]
> >>> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh
> >>> [2] http://paste.openstack.org/show/732483/
> >>> [3] http://paste.openstack.org/show/732486/
> >>> [4] http://paste.openstack.org/show/732487/
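(The s_client check Tobias mentions can be reproduced along these lines
-- the address and file names are illustrative, taken from the examples
in this thread:

    openssl s_client -connect 10.7.0.112:9443 \
        -cert client.pem -key client.pem \
        -CAfile ca_server_01.pem

With a correct dual-CA setup the handshake completes and s_client prints
"Verify return code: 0 (ok)"; a bad-signature or padding error here,
before any HTTP traffic is exchanged, points at the certificates rather
than the agent itself.)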
> >>> On 10/20/2018 01:53 AM, Michael Johnson wrote:
> >>>> Hi Erik,
> >>>>
> >>>> Sorry to hear you are still having certificate issues.
> >>>>
> >>>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant
> >>>> network for the VIP, one of the first steps after the worker connects
> >>>> to the amphora agent is finishing the required configuration of the
> >>>> VIP interface inside the network namespace on the amphora.
> >>>>
> >> Thanks for the hint on the workflow of this. I hadn't gotten deep
> >> enough into the code to find that yet, but I suspected it was blocking
> >> since the namespace never got created either. Thanks!
> >>
> >>>> If I remember correctly, you are attempting to configure Octavia with
> >>>> the dual CA option (which is good for non-development use).
> >>>>
> >>>> This is what I have for notes:
> >>>>
> >>>> [certificates] gets the following:
> >>>> cert_generator = local_cert_generator
> >>>> ca_certificate = server CA's "server.pem" file
> >>>> ca_private_key = server CA's "server.key" file
> >>>> ca_private_key_passphrase = pass phrase for ca_private_key
> >>>> [controller_worker]
> >>>> client_ca = Client CA's ca_cert file
> >>>> [haproxy_amphora]
> >>>> client_cert = Client CA's client.pem file (I think with its key
> >>>> concatenated, which is what rm_work said the other day)
> >>>> server_ca = Server CA's ca_cert file
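(Pulling those notes together, and matching the file names from the
generation sketch earlier in the thread, octavia.conf would look roughly
like this -- the paths are illustrative:

    [certificates]
    cert_generator = local_cert_generator
    ca_certificate = /etc/octavia/certs/ca_server_01.pem
    ca_private_key = /etc/octavia/certs/cakey.pem
    ca_private_key_passphrase = <passphrase, if the server CA key has one>

    [controller_worker]
    client_ca = /etc/octavia/certs/ca_01.pem

    [haproxy_amphora]
    client_cert = /etc/octavia/certs/client.pem
    server_ca = /etc/octavia/certs/ca_server_01.pem
)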
> >> This is all very helpful. It's a bit difficult to know what goes where
> >> the way the documentation is written presently. For something that's
> >> going to be the de facto standard for load balancing, we as a community
> >> need to do a better job of documenting how to set it up, configure it,
> >> and manage it in production. I'm trying to capture my lessons learned
> >> and processes as I go to help with that if I can.
> >>
> >> -Erik
> >>
> >>>> That said, I can probably run through this and write something up
> >>>> next week that is more step-by-step/detailed.
> >>>>
> >>>> Michael
> >>>>
> >>>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick
> >>>> wrote:
> >>>>> Apologies for cross-posting, but in the event that these might be
> >>>>> worth filing as bugs, I wanted the Octavia devs to see it as well...
> >>>>>
> >>>>> I've been wrestling with getting Octavia up and running and have
> >>>>> become stuck on two issues. I'm hoping someone has run into these
> >>>>> before. My google foo has come up empty.
> >>>>>
> >>>>> Issue 1:
> >>>>> When the Octavia controller tries to poll the amphora instance, it
> >>>>> tries repeatedly and eventually fails. The error on the controller
> >>>>> side is:
> >>>>>
> >>>>> 2018-10-19 14:17:39.181 26 ERROR
> >>>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> >>>>> retries (currently set to 300) exhausted. The amphora is unavailable.
> >>>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> >>>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> >>>>> SSLError(SSLError("bad handshake: Error([('rsa routines',
> >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> >>>>> 'tls_process_server_certificate', 'certificate verify
> >>>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> >>>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> >>>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> >>>>> 'tls_process_server_certificate', 'certificate verify
> >>>>> failed')],)",),))
> >>>>>
> >>>>> On the amphora side I see:
> >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request.
> >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from
> >>>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> >>>>> failure (_ssl.c:1754)
> >>>>>
> >>>>> I've generated certificates both with the script in the Octavia git
> >>>>> repo, and with the OpenStack Ansible playbook. I can see that they
> >>>>> are present in /etc/octavia/certs.
> >>>>>
> >>>>> I'm using the Kolla (Queens) containers for the control plane so I'm
> >>>>> sure I've satisfied all the python library constraints.
> >>>>>
> >>>>> Issue 2:
> >>>>> I'm not sure how it gets configured, but the tenant network interface
> >>>>> (ens6) never comes up. I can spawn other instances on that network
> >>>>> with no issue, and I can see that Neutron has the port attached to the
> >>>>> instance. However, in the instance this is all I get:
> >>>>>
> >>>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> >>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> >>>>> group default qlen 1
> >>>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>>>>     inet 127.0.0.1/8 scope host lo
> >>>>>        valid_lft forever preferred_lft forever
> >>>>>     inet6 ::1/128 scope host
> >>>>>        valid_lft forever preferred_lft forever
> >>>>> 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast
> >>>>> state UP group default qlen 1000
> >>>>>     link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
> >>>>>     inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
> >>>>>        valid_lft forever preferred_lft forever
> >>>>>     inet6 fe80::f816:3eff:fe30:c460/64 scope link
> >>>>>        valid_lft forever preferred_lft forever
> >>>>> 3: ens6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
> >>>>> default qlen 1000
> >>>>>     link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
> >>>>>
> >>>>> There's no evidence of the interface anywhere else including udev rules.
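(Worth checking from inside the amphora at this point: the agent plugs
the VIP interface into a network namespace rather than the default one,
so a bare `ip a` will show it DOWN until the plug call succeeds. A quick
check, assuming the usual namespace name:

    sudo ip netns list
    sudo ip netns exec amphora-haproxy ip a

If no namespace exists at all, the controller never got far enough to
issue the plug request, which is consistent with the handshake failures
in issue 1.)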
> >>>>> Any help with either or both issues would be greatly appreciated.
> >>>>>
> >>>>> Cheers,
> >>>>> Erik
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From michael.d.moore at nasa.gov Fri Oct 26 18:38:22 2018
From: michael.d.moore at nasa.gov (Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.])
Date: Fri, 26 Oct 2018 18:38:22 +0000
Subject: [Openstack-operators] [SOLVED] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
Message-ID: <1A95E2B0-D6D9-4F78-B91F-9AC3A3C361C4@nasa.gov>

TL;DR: glance config doesn't honor the documented default setting for
paste_deploy.flavor. The solution is to add the setting to
glance-api.conf. Patch to be submitted.

After the deep debugging yesterday, Jonathan did a deep compare of our
Mitaka configuration against Queens. He noted that this section was
missing in our Queens glance-api.conf (our config files are sparse and
only specify values where the defaults are not correct for us):

[paste_deploy]
flavor = keystone

Adding that allowed Jonathan to set an image to public
(publicize_image). It also made openstack image list (get_images) behave
as expected:

[root at vm013 common]# . /root/keystonerc_jonathan
[root at vm013 common]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 34a915b8-cca6-45c3-9348-5e15dace444f | cirros | active |
+--------------------------------------+--------+--------+

The Glance Queens configuration guide for glance_api states that the
default paste_deploy.flavor setting is 'keystone'. Refer to
https://docs.openstack.org/glance/queens/configuration/glance_api.html

It's readily apparent that without the setting in glance-api.conf it
does not behave properly, which suggests keystone is not actually the
default at runtime.

Glance common/config.py does not specify a default value for this
setting, but it does specify a sample_default:

https://github.com/openstack/glance/blob/master/glance/common/config.py
lines 31-52

paste_deploy_opts = [
    cfg.StrOpt('flavor',
               sample_default='keystone',
               help=_("""
Deployment flavor to use in the server application pipeline.

Provide a string value representing the appropriate deployment
flavor used in the server application pipeline. This is typically
the partial name of a pipeline in the paste configuration file with
the service name removed.

For example, if your paste section name in the paste configuration
file is [pipeline:glance-api-keystone], set ``flavor`` to
``keystone``.

Possible values:
    * String value representing a partial pipeline name.

Related Options:
    * config_file

""")),

Modifying the code like so:

               sample_default='keystone',
               default='keystone',
               help=_("""

makes it honor the documented default value.
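(The default vs. sample_default distinction is easy to demonstrate with
oslo.config directly -- a minimal sketch, separate from the glance code:

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.StrOpt('flavor', sample_default='keystone')],
        group='paste_deploy')
    conf(args=[])
    # sample_default only feeds the sample-config/documentation
    # generator; the runtime value is still None:
    print(conf.paste_deploy.flavor)

This prints None, so the documented 'keystone' never takes effect at
runtime unless default= is set as well.)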
I've submitted this as a patch on the bug report and a pull request on
github:
https://github.com/openstack/glance/pull/9

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

From: "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]"
Date: Thursday, October 25, 2018 at 6:48 PM
To: Jonathan Mills , "iain.macdonnell at oracle.com"
Cc: "openstack-oper." , "Thompson, John H. (GSFC-606.2)[InuTeq, LLC]"
Subject: Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants

I have dug deep into the code for glance, shoving in debug output to see
what I can find in our Queens environment.

Here is my debug code (I have a lot more, but this is the salient part):

        LOG.debug("in enforce(), action='%s', policyvalues='%s'",
                  action, context.to_policy_values())
        return super(Enforcer, self).enforce(action, target,
                                             context.to_policy_values(),
                                             do_raise=True,
                                             exc=exception.Forbidden,
                                             action=action)

Below is the output attempting to set an image that I own to public,
while being an admin, via `openstack image set --public cirros`:

2018-10-25 18:29:16.575 17561 DEBUG glance.api.policy
[req-e343bb10-8ec8-40df-8c0c-47d1b217ca0d - - - - -] in enforce(),
action='publicize_image', policyvalues='{'service_roles': [], 'user_id':
None, 'roles': [], 'user_domain_id': None, 'service_project_id': None,
'service_user_id': None, 'service_user_domain_id': None,
'service_project_domain_id': None, 'is_admin_project': True, 'user':
None, 'project_id': None, 'tenant': None, 'project_domain_id': None}'
enforce /usr/lib/python2.7/site-packages/glance/api/policy.py:64

And here is what shows up when I run `openstack image list` as our test
user (`jonathan`), who is NOT an admin:

2018-10-25 18:32:24.841 17564 DEBUG glance.api.policy
[req-22abdcf2-14cd-4680-8deb-e48902a7ddef - - - - -] in enforce(),
action='get_images', policyvalues='{'service_roles': [], 'user_id':
None, 'roles': [], 'user_domain_id': None, 'service_project_id': None,
'service_user_id': None, 'service_user_domain_id': None,
'service_project_domain_id': None, 'is_admin_project': True, 'user':
None, 'project_id': None, 'tenant': None, 'project_domain_id': None}'
enforce /usr/lib/python2.7/site-packages/glance/api/policy.py:64

The takeaway I have is that in the case of get_images, is_admin_project
is True, which is WRONG for that test, but since it's a read-only
operation it's content to short-circuit and return all those images. In
the case of publicize_image, is_admin_project being True isn't enough,
and when it checks user (which is None) it says NOPE.

So somehow, for some reason, the glance API's context is super duper
wrong.

Mike Moore, M.S.S.E.
Systems Engineer, Goddard Private Cloud
Michael.D.Moore at nasa.gov

Hydrogen fusion brightens my day.

From iain.macdonnell at oracle.com Fri Oct 26 18:47:40 2018
From: iain.macdonnell at oracle.com (iain MacDonnell)
Date: Fri, 26 Oct 2018 11:47:40 -0700
Subject: [Openstack-operators] [SOLVED] Glance Image Visibility Issue? - Non admin users can see private images from other tenants
In-Reply-To: <1A95E2B0-D6D9-4F78-B91F-9AC3A3C361C4@nasa.gov>
References: <1A95E2B0-D6D9-4F78-B91F-9AC3A3C361C4@nasa.gov>
Message-ID: <663aa139-0a0c-a0fd-78f3-514f08fe6cd4@oracle.com>

Hi Mike,

Interesting - nice detective work!

FWIW, I do have that explicitly set in my config, based on the
recommendation at:
https://docs.openstack.org/glance/latest/install/install-rdo.html#install-and-configure-components

Your github PR will not go anywhere - all changes must go through the
Gerrit system - start at:
https://docs.openstack.org/infra/manual/developers.html

If you don't want to go through all of that, I may be able to submit a
proposed change for you.

    ~iain

On 10/26/2018 11:38 AM, Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.] wrote:
> TL;DR: glance config doesn't honor documented default setting for
> paste_deploy.flavor. Solution is to add setting to glance-api.conf.
> Patch to be submitted.
> [...]
> > He noted that this section was missing in our Queens glance-api.conf > (our config files are sparse and only specify values if the defaults are > not correct for us) > > [paste_deploy] > > flavor = keystone > > Adding that allowed Jonathan to set an image to public > (publicize_image). It also made openstack image list (get_images) behave > as expected > > [root at vm013 common]# . /root/keystonerc_jonathan > > [root at vm013 common]# openstack image list > > +--------------------------------------+--------+--------+ > > | ID                                   | Name   | Status | > > +--------------------------------------+--------+--------+ > > | 34a915b8-cca6-45c3-9348-5e15dace444f | cirros | active | > > +--------------------------------------+--------+--------+ > > The Glance Queens configuration guide for glance_api states that the > default paste_deploy.flavor setting is ‘keystone’ > > Refer to > https://docs.openstack.org/glance/queens/configuration/glance_api.html > > > It’s readily apparent that without the setting in glance-api.conf that > it does not behave properly which suggests it does not actually set > keystone as the default > > Glance common/config.py does not specify a default value for this > setting, but it does specify a sample_default. > > https://github.com/openstack/glance/blob/master/glance/common/config.py > > > lines 31-52 > > paste_deploy_opts =[ > > > >     cfg.StrOpt('flavor', > > > > sample_default='keystone', > > > > help=_(""" > > > > Deployment flavor to use in the server application pipeline. > > > > > Provide a string value representing the appropriate deployment > > > > flavor used in the server application pipleline. This is typically > > > > the partial name of a pipeline in the paste configuration file with > > > > the service name removed. > > > > > For example, if your paste section name in the paste configuration > > > > file is [pipeline:glance-api-keystone], set ``flavor`` to > > > > ``keystone``. > > > > > Possible values: > > > >     * String value representing a partial pipeline name. > > > > > Related Options: > > > >     * config_file > > > > > """)), > > Modifying the code like so: > > sample_default='keystone', > >                default=’keystone’, > > help=_(""" > > Makes it honor the documented default value. > > I’ve submitted this as a patch on the bug report and a pull request on > github. > > https://github.com/openstack/glance/pull/9 > > > Mike Moore, M.S.S.E. > > Systems Engineer, Goddard Private Cloud > > Michael.D.Moore at nasa.gov > > ** > > Hydrogen fusion brightens my day. > > *From: *"Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]" > > *Date: *Thursday, October 25, 2018 at 6:48 PM > *To: *Jonathan Mills , "iain.macdonnell at oracle.com" > > *Cc: *"openstack-oper." , > "Thompson, John H. (GSFC-606.2)[InuTeq, LLC]" > *Subject: *Re: [Openstack-operators] Glance Image Visibility Issue? - > Non admin users can see private images from other tenants > > I have dug deep into the code for glance, shoving debug outputs to see > what I can find in our queens environment. 
From skaplons at redhat.com Mon Oct 29 15:39:23 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Mon, 29 Oct 2018 16:39:23 +0100
Subject: [Openstack-operators] [neutron] Automatically allow incoming DHCP traffic for networks which uses external dhcp server
Message-ID: <8CAA5D74-4518-4CDB-B68A-9640CD19F0FB@redhat.com>

Hi,

Some time ago an RFE was reported in Neutron to automatically allow
incoming DHCP traffic to the VM [1]. Basically this can be done today by
adding a proper security group rule that allows such incoming traffic to
the VM, but the idea of the RFE was to add a flag, called e.g.
„external_dhcp”, to the network/subnet, and if this flag is set to True,
to add such a firewall rule by default for each port.

This small RFE doesn't cover cases like "how to ensure that the external
DHCP server will be aware of the IP addresses assigned to ports in
Neutron's DB" or things like that. It's only about adding this one new
flag to subnet (or network) attributes instead of doing it „manually”
with security groups.
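(For reference, the manual equivalent today is a rule along these lines
-- the security group name is illustrative, and 67:68 covers both the
DHCP server and client ports:

    openstack security group rule create --ingress \
        --protocol udp --dst-port 67:68 my-secgroup

The flag proposed in the RFE would effectively have Neutron add such a
rule implicitly for every port on a flagged network.)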
And now the question to you: would you be interested in such a new
„feature”? So far we have had only one request for it, and we are not
sure it is worth implementing, but if there is more interest we can
revive this RFE.

[1] https://bugs.launchpad.net/neutron/+bug/1785213

— Slawek Kaplonski
Senior software engineer
Red Hat

From fungi at yuggoth.org Mon Oct 29 16:53:47 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 29 Oct 2018 16:53:47 +0000
Subject: [Openstack-operators] [all] We're combining the lists! (was: Bringing the community together...)
In-Reply-To: <20180920163248.oia5t7zjqcfwluwz@yuggoth.org>
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org>
Message-ID: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org>

REMINDER: The openstack, openstack-dev, openstack-sigs and
openstack-operators mailing lists (to which this is being sent) will be
replaced by a new openstack-discuss at lists.openstack.org mailing list.
The new list is open for subscriptions[0] now, but will not accept posts
until Monday November 19; it's strongly recommended to subscribe before
that date so as not to miss any messages posted there. The old lists
will be configured to no longer accept posts starting on Monday
December 3, but in the interim posts to the old lists will also get
copied to the new list, so it's safe to unsubscribe from them any time
after the 19th and not miss any messages. See my previous notice[1] for
details.

For those wondering, we have 127 subscribers so far on
openstack-discuss with 3 weeks to go before it will be put into use
(and 5 weeks now before the old lists are closed down for good).

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html

-- 
Jeremy Stanley

From mrhillsman at gmail.com Mon Oct 29 17:08:54 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 29 Oct 2018 12:08:54 -0500
Subject: [Openstack-operators] Fwd: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)
In-Reply-To:
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local>
Message-ID:

---------- Forwarded message ---------
From: Samuel Cassiba
Date: Fri, Sep 21, 2018 at 12:15 AM
Subject: Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)
To: openstack-dev

On Thu, Sep 20, 2018 at 2:48 PM Doug Hellmann wrote:
>
> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000:
> > tl;dr: The openstack, openstack-dev, openstack-sigs and
> > openstack-operators mailing lists (to which this is being sent) will
> > be replaced by a new openstack-discuss at lists.openstack.org mailing
> > list.
>
> Since last week there was some discussion of including the openstack-tc
> mailing list among these lists to eliminate confusion caused by the fact
> that the list is not configured to accept messages from all subscribers
> (it's meant to be used for us to make sure TC members see meeting
> announcements).
>
> I'm inclined to include it and either use a direct mailing or the
> [tc] tag on the new discuss list to reach TC members, but I would
> like to hear feedback from TC members and other interested parties
> before calling that decision made.
> Please let me know what you think.
>
> Doug

+1. Including the TC list as a tag makes sense to me, and fits with my
tangent about intent in online communities.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

From mrhillsman at gmail.com Mon Oct 29 17:28:50 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 29 Oct 2018 12:08:54 -0500
Subject: [Openstack-operators] [user-committee] UC Meeting Reminder
Message-ID:

UC meeting in #openstack-uc at 1800 UTC

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

From tony at bakeyournoodle.com Tue Oct 30 05:40:25 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 30 Oct 2018 16:40:25 +1100
Subject: [Openstack-operators] [all]Naming the T release of OpenStack -- Poll open
Message-ID: <20181030054024.GC2343@thor.bakeyournoodle.com>

Hi folks,

It is time again to cast your vote for the naming of the T release. As
with last time we'll use a public polling option rather than per-user
private URLs for voting. This means everybody should proceed to use the
following URL to cast their vote:

https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e

We've selected a public poll to ensure that the whole community, not
just gerrit change owners, gets a vote. Also, the size of our community
has grown such that we can overwhelm CIVS if using private URLs. A
public poll can mean that users behind NAT, proxy servers or firewalls
may receive a message saying that their vote has already been lodged; if
this happens please try another IP.

Because this is a public poll, results will only be viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the
results while the public poll is running.

The poll will officially end on 2018-11-08 00:00:00+00:00[1], and
results will be posted shortly after.

[1] https://governance.openstack.org/tc/reference/release-naming.html

---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the T release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the
second or later community preference could wind up being the name.

Release Name Criteria
---------------------

Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of "Austin". After "Z", the next name should
start with "A" again. The name must be composed only of the 26
characters of the ISO basic Latin alphabet. Names which can be
transliterated into this character set are also acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region
under consideration must be declared before the opening of nominations,
as part of the initiation of the selection process.
The name must be a single word with a maximum of 10 characters. Words
that describe the feature should not be included, so "Foo City" or "Foo
Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page, and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll
opens.

Exact Geographic Region
-----------------------

The geographic region from which names for the T release will come is
Colorado.

Proposed Names
--------------

* Tarryall
* Teakettle
* Teller
* Telluride
* Thomas : the Tank Engine
* Thornton
* Tiger
* Tincup
* Timnath
* Timber
* Tiny Town
* Torreys
* Trail
* Trinidad
* Treasure
* Troublesome
* Trussville
* Turret
* Tyrone

Proposed Names that do not meet the criteria (accepted by the TC)
-----------------------------------------------------------------

* Train🚂 : Many attendees of the first Denver PTG have a story to tell
  about the trains near the PTG hotel. We could celebrate those stories
  with this name.

Yours Tony.

From rico.lin.guanyu at gmail.com Tue Oct 30 11:15:21 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Tue, 30 Oct 2018 19:15:21 +0800
Subject: [Openstack-operators] [openstack-sigs][all] Berlin Forum for `expose SIGs and WGs`
Message-ID:

Hi all,

To continue our discussion from Denver, we will have a forum session [1]
in Berlin on Wednesday, November 14, 11:50am-12:30pm, CityCube Berlin -
Level 3 - M-Räume 8.

We will host the forum in an open-discussion format and try to capture
actions from it, so we can keep pushing on what people need. If you have
any feedback or ideas, please join us. I created an etherpad for this
forum so we can collect information, gather feedback, and record
actions:

https://etherpad.openstack.org/p/expose-sigs-and-wgs

For those who don't know what `expose SIGs and WGs` is about: there is
some earlier discussion on the ML [2] and from a PTG session [3]. The
basic concept is to give users/ops a single window to turn important
scenarios, use cases, or issues into traceable tasks in a single
story/place, and to ask developers (by changing mission statements or
governance policy) to be responsible for co-working on those tasks.
SIGs/WGs are eager to get feedback and use cases, and so are project
teams (not going to speak for all projects/SIGs/WGs, but we would like
to collect more input for sure). Project teams would also get a central
place to develop against specific user requirements, or to provide
documentation for more general OpenStack information.

So we would like to discuss how we can reach this goal with concrete
actions: how can we change TC, UC, project, SIG, and WG policies to
bridge from users/ops up to developers?

[1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22750/expose-sigs-and-wgs
[2] http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000453.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html

-- 
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
From mihalis68 at gmail.com Tue Oct 30 17:58:13 2018
From: mihalis68 at gmail.com (Chris Morgan)
Date: Tue, 30 Oct 2018 13:58:13 -0400
Subject: [Openstack-operators] Ops Meetups team meeting 2018-10-30
Message-ID:

Brief meeting today on #openstack-operators; minutes below. If you are
attending Berlin, please start contributing to the Forum by selecting
sessions of interest and then adding to the etherpads (see
https://wiki.openstack.org/wiki/Forum/Berlin2018). I hear there's going
to be a really great one about ceph, for example.

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.log.html

Chris

-- 
Chris Morgan

From tony at bakeyournoodle.com Wed Oct 31 01:01:31 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 31 Oct 2018 12:01:31 +1100
Subject: [Openstack-operators] [openstack-dev] [all]Naming the T release of OpenStack -- Poll open
In-Reply-To:
References: <20181030054024.GC2343@thor.bakeyournoodle.com>
Message-ID: <20181031010130.GE2343@thor.bakeyournoodle.com>

On Tue, Oct 30, 2018 at 11:25:02AM -0700, iain macdonnell wrote:
> I must be losing it. On what planet is "Tiny Town" a single word, and
> "Troublesome" not more than 10 characters?

Sorry for the mistake. Should either of these names win the popular
vote, clearly they would not be viable.

Yours Tony.