[Openstack] Keystone Unauthorized while creating/starting instance (solved)

Andrea Gatta andrea.gatta at gmail.com
Tue Mar 6 11:39:42 UTC 2018


Thanks Eugen. Just to let you know that after placing Keystone into
insecure_debug mode I was clearly able to see that it was a credentials
issue in the end:

2018-03-06 10:20:15.498 2094 WARNING keystone.common.wsgi
[req-577f666f-ca7b-4b17-8a0a-e9d012bb60e0 - - - - -] Authorization
failed. Invalid
username or password (Disable insecure_debug mode to suppress these
details.) from 10.0.0.31

At first I thought the nova user was at the root of the issue, but as you
mentioned, neutron was definitely more relevant to the problem at hand. So
it turns out that the keystone_authtoken section of nova.conf on the
compute node had a typo in the password field ;(

As they say, all's well that ends well... that typo cost me hours in
front of the console, but I learned a lot in the process, so it's OK.
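
For anyone who hits the same symptom, this is roughly the sequence that
exposed the typo. A minimal sketch only; the host name, user and password
values below are placeholders, not my real settings:

# /etc/keystone/keystone.conf on the controller -- verbose auth errors
[DEFAULT]
insecure_debug = true

# /etc/nova/nova.conf on the compute node -- the section with the typo
[keystone_authtoken]
auth_url = http://controller:5000/v3
username = nova
password = NOVA_PASS   # <- this value did not match the keystone user

# verify the credentials by hand before restarting any services
openstack --os-auth-url http://controller:5000/v3 \
  --os-username nova --os-password NOVA_PASS \
  --os-project-name service --os-user-domain-name Default \
  --os-project-domain-name Default token issue

Remember to switch insecure_debug off again afterwards, since it leaks
authentication details to clients.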

Thanks for your help on this one.

Regards
Andrea

On Tue, Mar 6, 2018 at 9:54 AM, <openstack-request at lists.openstack.org>
wrote:

> Send Openstack mailing list submissions to
>         openstack at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> or, via email, send a message with subject or body 'help' to
>         openstack-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Openstack digest..."
>
>
> Today's Topics:
>
>    1. Re: Openstack data replication (John Dickinson)
>    2. Release Naming for S - time to suggest a name! (Paul Belanger)
>    3. Re: Openstack data replication (aRaviNd)
>    4. Swift Implementation on VMWare environment (aRaviNd)
>    5. Compute Node not mounting disk to VM's (Yedhu Sastry)
>    6. Re: [User-committee] [Openstack-operators] User Committee
>       Elections (Shilla Saebi)
>    7. cinder-volume can not live migration (Cheung 楊禮銓)
>    8. User Committee Election Results - February 2018 (Shilla Saebi)
>    9. Re: [User-committee] User Committee Election Results -
>       February 2018 (Edgar Magana)
>   10. Re: [Openstack-operators] User Committee Election Results -
>       February 2018 (Jimmy McArthur)
>   11. Instances lost connectivity with metadata service.
>       (Jorge Luiz Correa)
>   12. Re: Instances lost connectivity with metadata service.
>       (Itxaka Serrano Garcia)
>   13. Re: Instances lost connectivity with metadata service.
>       (Tobias Urdin)
>   14. Re: Instances lost connectivity with metadata service.
>       (Paras pradhan)
>   15. Re: Compute Node not mounting disk to VM's (Eugen Block)
>   16. Re: Compute Node not mounting disk to VM's (Steven Relf)
>   17. OpenStack Queens for Ubuntu 16.04 LTS (Corey Bryant)
>   18. Migration of attached cinder volumes fails. (Torin Woltjer)
>   19. Can't start instance - "Instance failed network setup after 1
>       attempt(s)/No valid host was found. There are not enough hosts
>       available" (Andrea Gatta)
>   20. Re: Can't start instance - "Instance failed network setup
>       after 1 attempt(s)/No valid host was found. There are not enough
>       hosts available" (Eugen Block)
>   21. [nova] using nova.scheduler.HostManager() (newbie question) (Ed -)
>   22. Re: [openstack-dev] Release Naming for S - time to suggest a
>       name! (Paul Belanger)
>   23. Keystone Unauthorized: The request you have made requires
>       authentication while creating/starting instance (Andrea Gatta)
>   24. Fwd: [Openstack-sigs] [forum] Brainstorming Topics for
>       Vancouver 2018 (Melvin Hillsman)
>   25. Re: Keystone Unauthorized: The request you have made requires
>       authentication while creating/starting instance (Eugen Block)
>   26. Some questions about "Cinder Multi-Attach" in Openstack
>       Queens (谭 明宵)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 20 Feb 2018 10:23:08 -0800
> From: "John Dickinson" <me at not.mn>
> To: aRaviNd <ambadiaravind at gmail.com>
> Cc: Openstack <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Openstack data replication
> Message-ID: <C6738859-8D3F-4520-90B1-1D720649843B at not.mn>
> Content-Type: text/plain; charset="utf-8"
>
> For example, you can have 3 replicas stored in a global cluster and get
> dispersion across multiple geographic regions. But it's all one logical
> cluster.
>
> With container sync, you've got separate clusters with their own
> durability characteristics. So you would have, e.g., 3 replicas in each
> cluster, meaning 6x total storage for the data that is synced between the
> two clusters.
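>
> For concreteness, a minimal sketch of both setups (part power, IPs,
> device names, weights, realm and container names are all made-up
> values). The regions of a global cluster live entirely in the ring:
>
> swift-ring-builder object.builder create 14 3 1
> swift-ring-builder object.builder add r1z1-10.0.1.10:6200/sdb 100
> swift-ring-builder object.builder add r2z1-10.0.2.10:6200/sdb 100
> swift-ring-builder object.builder rebalance
>
> Container sync instead pairs two containers across clusters:
>
> swift post -t '//realm/cluster2/AUTH_test/dest' -k 'synckey' source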
>
> --John
>
>
>
> On 18 Feb 2018, at 22:11, aRaviNd wrote:
>
> > Thanks John.
> >
> > You mentioned sync process in global clusters is more efficient. Could
> you
> > please let me know how sync process is more efficient in global clusters
> > than container sync?
> >
> > Aravind
> >
> > On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson <me at not.mn> wrote:
> >
> >> A global cluster is one logical cluster that durably stores data across
> >> all the available failure domains (the highest level of failure domain
> is
> >> "region"). For example, if you have 2 regions (ie DCs)and you're using 4
> >> replicas, you'll end up with 2 replicas in each.
> >>
> >> Container sync is for taking a subset of data stored in one Swift
> cluster
> >> and synchronizing it with a different Swift cluster. Each Swift cluster
> is
> >> autonomous and handles its own durability. So, e.g., if each Swift cluster
> >> uses 3 replicas, you'll end up with 6x total storage for the data that
> is
> >> synced.
> >>
> >> In most cases, people use global clusters and are happy with it. It's
> >> definitely been more used than container sync, and the sync process in
> >> global clusters is more efficient.
> >>
> >> However, deploying a multi-region Swift cluster comes with an extra set
> of
> >> challenges above and beyond a single-site deployment. You've got to
> >> consider more things with your inter-region networking, your network
> >> routing, the access patterns in each region, your requirements around
> >> locality, and the data placement of your data.
> >>
> >> All of these challenges are solvable, of course. Start with
> >> https://swift.openstack.org and also feel free to ask here on the
> mailing
> >> list or on freenode IRC in #openstack-swift.
> >>
> >> Good luck!
> >>
> >> John
> >>
> >>
> >> On 14 Feb 2018, at 6:55, aRaviNd wrote:
> >>
> >> Hi All,
> >>
> >> What's the difference between container sync and global cluster? Which
> >> should we use for a large data set of 100 TB?
> >>
> >> Aravind
> >>
> >> On Feb 13, 2018 7:52 PM, "aRaviNd" <ambadiaravind at gmail.com> wrote:
> >>
> >> Hi All,
> >>
> >> We are working on implementing OpenStack Swift replication and would
> >> like to know which is the better approach, container sync or global
> >> cluster, and in which scenario we should choose one over the other.
> >>
> >> The Swift cluster will be used as a backend for a web application deployed
> >> in multiple regions, configured as active-passive using DNS.
> >>
> >> Data usage can grow up to 100 TB, starting with 1 TB. What would be the
> >> better option to sync data between regions?
> >>
> >> Thank You
> >>
> >> Aravind M D
> >>
> >>
> >> _______________________________________________
> >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >> Post to : openstack at lists.openstack.org
> >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >>
> >>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 20 Feb 2018 20:19:59 -0500
> From: Paul Belanger <pabelanger at redhat.com>
> To: openstack at lists.openstack.org
> Cc: openstack-dev at lists.openstack.org
> Subject: [Openstack] Release Naming for S - time to suggest a name!
> Message-ID: <20180221011959.GA30957 at localhost.localdomain>
> Content-Type: text/plain; charset=us-ascii
>
> Hey everybody,
>
> Once again, it is time for us to pick a name for our "S" release.
>
> Since the associated Summit will be in Berlin, the Geographic
> Location has been chosen as "Berlin" (State).
>
> Nominations are now open. Please add suitable names to
> https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> and 2018-03-05 23:59 UTC.
>
> In case you don't remember the rules:
>
> * Each release name must start with the letter of the ISO basic Latin
> alphabet following the initial letter of the previous release, starting
> with the initial release of "Austin". After "Z", the next name should
> start with "A" again.
>
> * The name must be composed only of the 26 characters of the ISO basic
> Latin alphabet. Names which can be transliterated into this character
> set are also acceptable.
>
> * The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region
> under consideration must be declared before the opening of nominations,
> as part of the initiation of the selection process.
>
> * The name must be a single word with a maximum of 10 characters. Words
> that describe the feature should not be included, so "Foo City" or "Foo
> Peak" would both be eligible as "Foo".
>
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may
> make an exception for one or more of them to be considered in the
> Condorcet poll. The naming official is responsible for presenting the
> list of exceptional names for consideration to the TC before the poll
> opens.
>
> Let the naming begin.
>
> Paul
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 21 Feb 2018 22:50:58 +0530
> From: aRaviNd <ambadiaravind at gmail.com>
> To: John Dickinson <me at not.mn>
> Cc: Openstack <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Openstack data replication
> Message-ID:
>         <CAFhtsc3NfMgbN8n7oS-5vCy+uQEydhbC26NmemJhX1_jGjfPnw@
> mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks John.
>
> On Tue, Feb 20, 2018 at 11:53 PM, John Dickinson <me at not.mn> wrote:
>
> > For example, you can have 3 replicas stored in a global cluster and get
> > dispersion across multiple geographic regions. But it's all one logical
> > cluster.
> >
> > With container sync, you've got separate clusters with their own
> > durability characteristics. So you would have, e.g., 3 replicas in each
> > cluster, meaning 6x total storage for the data that is synced between the
> > two clusters.
> >
> > --John
> >
> >
> > On 18 Feb 2018, at 22:11, aRaviNd wrote:
> >
> > Thanks John.
> >
> > You mentioned sync process in global clusters is more efficient. Could
> you
> > please let me know how sync process is more efficient in global clusters
> > than container sync?
> >
> > Aravind
> >
> > On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson <me at not.mn> wrote:
> >
> >> A global cluster is one logical cluster that durably stores data across
> >> all the available failure domains (the highest level of failure domain
> is
> >> "region"). For example, if you have 2 regions (ie DCs)and you're using 4
> >> replicas, you'll end up with 2 replicas in each.
> >>
> >> Container sync is for taking a subset of data stored in one Swift
> cluster
> >> and synchronizing it with a different Swift cluster. Each Swift cluster
> is
> >> autonomous and handles its own durability. So, e.g., if each Swift cluster
> >> uses 3 replicas, you'll end up with 6x total storage for the data that
> is
> >> synced.
> >>
> >> In most cases, people use global clusters and are happy with it. It's
> >> definitely been more used than container sync, and the sync process in
> >> global clusters is more efficient.
> >>
> >> However, deploying a multi-region Swift cluster comes with an extra set
> >> of challenges above and beyond a single-site deployment. You've got to
> >> consider more things with your inter-region networking, your network
> >> routing, the access patterns in each region, your requirements around
> >> locality, and the data placement of your data.
> >>
> >> All of these challenges are solvable, of course. Start with
> >> https://swift.openstack.org and also feel free to ask here on the
> >> mailing list or on freenode IRC in #openstack-swift.
> >>
> >> Good luck!
> >>
> >> John
> >>
> >>
> >> On 14 Feb 2018, at 6:55, aRaviNd wrote:
> >>
> >> Hi All,
> >>
> >> What's the difference between container sync and global cluster? Which
> >> should we use for a large data set of 100 TB?
> >>
> >> Aravind
> >>
> >> On Feb 13, 2018 7:52 PM, "aRaviNd" <ambadiaravind at gmail.com> wrote:
> >>
> >> Hi All,
> >>
> >> We are working on implementing OpenStack Swift replication and would
> >> like to know which is the better approach, container sync or global
> >> cluster, and in which scenario we should choose one over the other.
> >>
> >> The Swift cluster will be used as a backend for a web application deployed
> >> in multiple regions, configured as active-passive using DNS.
> >>
> >> Data usage can grow up to 100 TB, starting with 1 TB. What would be the
> >> better option to sync data between regions?
> >>
> >> Thank You
> >>
> >> Aravind M D
> >>
> >>
> >> _______________________________________________
> >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >> Post to : openstack at lists.openstack.org
> >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >>
> >>
> >
>
> ------------------------------
>
> Message: 4
> Date: Wed, 21 Feb 2018 22:53:52 +0530
> From: aRaviNd <ambadiaravind at gmail.com>
> To: Openstack <openstack at lists.openstack.org>
> Subject: [Openstack] Swift Implementation on VMWare environment
> Message-ID:
>         <CAFhtsc1kUw9j_kvYCUQtP21Eb4nJH_LW0AxmJ=O+T+9L=
> VyVCw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi All,
>
> Has anybody implemented a Swift cluster in a production VMware environment?
>
> If so, what would be an ideal VM configuration for a PAC
> (proxy/account/container) node and an object node? We are planning a Swift
> cluster of 100 TB.
>
> Aravind
>
> ------------------------------
>
> Message: 5
> Date: Thu, 22 Feb 2018 15:31:19 +0100
> From: Yedhu Sastry <yedhusastri at gmail.com>
> To: openstack at lists.openstack.org
> Subject: [Openstack] Compute Node not mounting disk to VM's
> Message-ID:
>         <CANX8mtJLZAoHC7Pfm7EhBXPBRhrVHF576Q-AkkfCVNoWCyz6kQ at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello,
>
> I have an OpenStack cluster (Newton) which is basically a test cluster.
> After the regular OS security update and upgrade on all my compute nodes I
> have a problem with new VMs. While launching new VMs I am getting the error
> "ALERT! LABEL=cloudimg-rootfs does not exist. Dropping to a shell!" in the
> console log of the VMs. In Horizon they show as active. I am booting from
> an image, not from a volume. Before the update everything was fine.
>
> Then I checked all the logs related to OpenStack and I can't find any info
> related to this. I spent days on it and found that after the update libvirt
> is now using scsi instead of virtio. I don't know why. All the VMs I
> created before the update are running fine and use 'virtio'. I then tried
> to manually change the libvirt instancexx.xml file to use "<target
> dev='vda' bus='virtio'/>" and started the VM again with 'virsh start
> instancexx'. The VM started and then went to the shutdown state. But in the
> console log I can see the VM getting an IP and booting properly without any
> error before it goes to the poweroff state.
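>
> For anyone reproducing this, the bus in use is visible directly in the
> domain XML on the compute node (the instance name is a placeholder; a
> pre-update guest should report bus='virtio'):
>
> virsh dumpxml instancexx | grep "target dev"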
>
>
> 1) Is this issue related to the update of libvirt? If so, why is libvirt no
> longer using virtio_blk, and why is it using only virtio_scsi? Is it
> possible to change libvirt to use virtio_blk instead of virtio_scsi?
>
> 2) I found that the nova package version on the compute nodes is 14.0.10
> while on the controller node it is 14.0.1. Could this be the cause of the
> problem, and would an update on the controller node solve it? I am not sure
> about this.
>
> 3) Why is the task status of instancexx shown as Powering Off in Horizon
> after 'virsh start instancexx' on the compute node? Why does it not start
> the VM with the manually customized libvirt .xml file?
>
>
> Any help is really appreciated.
>
>
> --
>
> Thank you for your time and have a nice day,
>
>
> With kind regards,
> Yedhu Sastri
>
> ------------------------------
>
> Message: 6
> Date: Thu, 22 Feb 2018 14:40:05 -0500
> From: Shilla Saebi <shilla.saebi at gmail.com>
> To: Arkady.Kanevsky at dell.com
> Cc: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>, OpenStack Operators
>         <openstack-operators at lists.openstack.org>, user-committee
>         <user-committee at lists.openstack.org>,
> openstack at lists.openstack.org,
>         community at lists.openstack.org
> Subject: Re: [Openstack] [User-committee] [Openstack-operators] User
>         Committee Elections
> Message-ID:
>         <CAPrU3jGhMw=S8u9WOmbpGtUQn5TRrzcHm=VO_
> C7L8OxWHVdHWw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Everyone,
>
> Just a friendly reminder that voting is still open! Please be sure to check
> out the candidates - https://goo.gl/x183he - and vote before February
> 25th,
> 11:59 UTC. Thanks!
>
> Shilla
>
>
> On Mon, Feb 19, 2018 at 1:38 PM, <Arkady.Kanevsky at dell.com> wrote:
>
> > I saw the election email with the pointer to the votes.
> >
> > I see no reason to stop it now, but extending the vote for 1 more week
> > makes sense.
> >
> > Thanks,
> > Arkady
> >
> >
> >
> > *From:* Melvin Hillsman [mailto:mrhillsman at gmail.com]
> > *Sent:* Monday, February 19, 2018 11:32 AM
> > *To:* user-committee <user-committee at lists.openstack.org>; OpenStack
> > Mailing List <openstack at lists.openstack.org>; OpenStack Operators <
> > openstack-operators at lists.openstack.org>; OpenStack Dev <
> > openstack-dev at lists.openstack.org>; community at lists.openstack.org
> > *Subject:* [Openstack-operators] User Committee Elections
> >
> >
> >
> > Hi everyone,
> >
> >
> >
> > We had to push the voting back a week if you have been keeping up with
> the
> > UC elections[0]. That being said, election officials have sent out the
> poll
> > and so voting is now open! Be sure to check out the candidates -
> > https://goo.gl/x183he - and get your vote in before the poll closes.
> >
> >
> >
> > [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html
> >
> >
> >
> > --
> >
> > Kind regards,
> >
> > Melvin Hillsman
> >
> > mrhillsman at gmail.com
> > mobile: (832) 264-2646
> >
> > _______________________________________________
> > User-committee mailing list
> > User-committee at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
> >
> >
>
> ------------------------------
>
> Message: 7
> Date: Fri, 23 Feb 2018 02:30:27 +0000
> From: Cheung 楊禮銓 <Cheung at ezfly.com>
> To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: [Openstack] cinder-volume can not live migration
> Message-ID: <1519353026.15482.2.camel at ezfly.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear:
>
> If the volume size is bigger than 50 GB, I cannot live-migrate an LVM
> volume.
>
> I am using the OpenStack Pike release.
>
> Am I missing something?
>
> [root at controller01: cinder]# cinder get-pools
> +----------+--------------------------+
> | Property | Value                    |
> +----------+--------------------------+
> | name     | controller01 at lvm#LVM-SAS |
> +----------+--------------------------+
> +----------+--------------------------+
> | Property | Value                    |
> +----------+--------------------------+
> | name     | controller02 at lvm#LVM-SAS |
> +----------+--------------------------+
>
>
> [root at controller01: cinder]# openstack volume show 1495b9e9-e56a-468b-a134-59b0a728fa00
> +--------------------------------+--------------------------------------+
> | Field                          | Value                                |
> +--------------------------------+--------------------------------------+
> | attachments                    | []                                   |
> | availability_zone              | nova                                 |
> | bootable                       | false                                |
> | consistencygroup_id            | None                                 |
> | created_at                     | 2018-02-23T02:15:14.000000           |
> | description                    |                                      |
> | encrypted                      | False                                |
> | id                             | 1495b9e9-e56a-468b-a134-59b0a728fa00 |
> | migration_status               | error                                |
> | multiattach                    | False                                |
> | name                           | windows                              |
> | os-vol-host-attr:host          | controller01 at lvm#LVM-SAS            |
> | os-vol-mig-status-attr:migstat | error                                |
> | os-vol-mig-status-attr:name_id | None                                 |
> | os-vol-tenant-attr:tenant_id   | 963097c754bf40c5a077f2ae89be36c3     |
> | properties                     |                                      |
> | replication_status             | None                                 |
> | size                           | 51                                   |
> | snapshot_id                    | None                                 |
> | source_volid                   | None                                 |
> | status                         | available                            |
> | type                           | LVM-SAS                              |
> | updated_at                     | 2018-02-23T02:18:47.000000           |
> | user_id                        | 5bfa4f66825a40709e44a047bd251bcb     |
> +--------------------------------+--------------------------------------+
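>
> One size-related setting exists in this code path and may be worth ruling
> out (option name as listed in cinder's configuration reference; the value
> below is only an example). In /etc/cinder/cinder.conf:
>
> [DEFAULT]
> # how long cinder waits for the destination volume of a migration to
> # become available; the default is 300 seconds
> migration_create_volume_timeout_secs = 3600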
>
>
>
>
>
>
> --
>
> The information contained in this communication and attachment is
> confidential and is intended only for the use of the recipient to which
> this communication is addressed. Any disclosure, copying or distribution of
> this communication without the sender's consents is strictly prohibited. If
> you are not the intended recipient, please notify the sender and delete
> this communication entirely without using, retaining, or disclosing any of
> its contents. Internet communications cannot be guaranteed to be
> virus-free. The recipient is responsible for ensuring that this
> communication is virus free and the sender accepts no liability for any
> damages caused by virus transmitted by this communication.
>
> ------------------------------
>
> Message: 8
> Date: Sun, 25 Feb 2018 18:52:16 -0500
> From: Shilla Saebi <shilla.saebi at gmail.com>
> To: user-committee <user-committee at lists.openstack.org>,  OpenStack
>         Mailing List <openstack at lists.openstack.org>,  OpenStack Operators
>         <openstack-operators at lists.openstack.org>,  OpenStack Dev
>         <openstack-dev at lists.openstack.org>, community at lists.openstack.org
> Subject: [Openstack] User Committee Election Results - February 2018
> Message-ID:
>         <CAPrU3jFdC16MKeF5hiwyyb8A-0iv_R44uc0RkWYmdD36Ef=ahQ@
> mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello Everyone!
>
> Please join me in congratulating 3 newly elected members of the User
> Committee (UC)! The winners for the 3 seats are:
>
> Melvin Hillsman
> Amy Marrich
> Yih Leong Sun
>
> Full results can be found here:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045
>
> Election details can also be found here:
> https://governance.openstack.org/uc/reference/uc-election-feb2018.html
>
> Thank you to all of the candidates, and to all of you who voted and/or
> promoted the election!
>
> Shilla
>
> ------------------------------
>
> Message: 9
> Date: Mon, 26 Feb 2018 03:38:54 +0000
> From: Edgar Magana <edgar.magana at workday.com>
> To: Shilla Saebi <shilla.saebi at gmail.com>
> Cc: OpenStack Operators <openstack-operators at lists.openstack.org>,
>         OpenStack Mailing List <openstack at lists.openstack.org>, OpenStack
> Dev
>         <openstack-dev at lists.openstack.org>, user-committee
>         <user-committee at lists.openstack.org>, "
> community at lists.openstack.org"
>         <community at lists.openstack.org>
> Subject: Re: [Openstack] [User-committee] User Committee Election
>         Results -       February 2018
> Message-ID: <876B0B60-ADB0-4CE4-B1FC-5110622D08BE at workday.com>
> Content-Type: text/plain; charset="utf-8"
>
> Congratulations, folks! We have a great team to continue growing the UC.
> Your first action is to assign a chair for the UC and let the board of
> directors know about your election.
>
> I wish you all the best!
>
> Edgar Magana
>
>
> On Feb 25, 2018, at 3:53 PM, Shilla Saebi <shilla.saebi at gmail.com> wrote:
>
> Hello Everyone!
>
> Please join me in congratulating 3 newly elected members of the User
> Committee (UC)! The winners for the 3 seats are:
>
> Melvin Hillsman
> Amy Marrich
> Yih Leong Sun
>
> Full results can be found here:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045
>
> Election details can also be found here:
> https://governance.openstack.org/uc/reference/uc-election-feb2018.html
>
> Thank you to all of the candidates, and to all of you who voted and/or
> promoted the election!
>
> Shilla
> _______________________________________________
> User-committee mailing list
> User-committee at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
> ------------------------------
>
> Message: 10
> Date: Mon, 26 Feb 2018 09:40:57 +0000
> From: Jimmy McArthur <jimmy at openstack.org>
> To: Shilla Saebi <shilla.saebi at gmail.com>
> Cc: OpenStack Operators <openstack-operators at lists.openstack.org>,
>         OpenStack Mailing List <openstack at lists.openstack.org>, OpenStack
> Dev
>         <openstack-dev at lists.openstack.org>, user-committee
>         <user-committee at lists.openstack.org>,
> community at lists.openstack.org
> Subject: Re: [Openstack] [Openstack-operators] User Committee Election
>         Results - February 2018
> Message-ID: <5A93D629.2000704 at openstack.org>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Congrats everyone! And thanks to the UC Election Committee for managing :)
>
> Cheers,
> Jimmy
>
> > Shilla Saebi <shilla.saebi at gmail.com>
> > February 25, 2018 at 11:52 PM
> > Hello Everyone!
> >
> > Please join me in congratulating 3 newly elected members of the User
> > Committee (UC)! The winners for the 3 seats are:
> >
> > Melvin Hillsman
> > Amy Marrich
> > Yih Leong Sun
> >
> > Full results can be found here:
> > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045
> >
> > Election details can also be found here:
> > https://governance.openstack.org/uc/reference/uc-election-feb2018.html
> >
> > Thank you to all of the candidates, and to all of you who voted and/or
> > promoted the election!
> >
> > Shilla
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ------------------------------
>
> Message: 11
> Date: Mon, 26 Feb 2018 08:53:22 -0300
> From: Jorge Luiz Correa <correajl at gmail.com>
> To: openstack <openstack at lists.openstack.org>
> Subject: [Openstack] Instances lost connectivity with metadata
>         service.
> Message-ID:
>         <CAE2bT_04m2COtrDuEVAKuAMzh+JqEY4RgTj9pJqZ_NSG+2jURA at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I would like some help identifying (and correcting) a problem with instance
> metadata during boot. My environment is a Mitaka installation on Ubuntu
> 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes. I'm using
> classic OVS as the network setup.
>
> The problem occurs after some period of time in some projects (not all
> projects at the same time). When booting an Ubuntu Cloud Image with
> cloud-init, instances lose the connection to the metadata API and don't get
> their information like key pairs and cloud-init scripts.
>
> [  118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 -
> url_helper.py[WARNING]: Calling '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> [101/120s]:
> request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max
> retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out.
> (connect timeout=50.0)'))]
> [  136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 -
> url_helper.py[WARNING]: Calling '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> [119/120s]:
> request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max
> retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out.
> (connect timeout=17.0)'))]
> [  137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 -
> DataSourceEc2.py[CRITICAL]: Giving up on md from ['
> http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120
> seconds
> [  137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 -
> url_helper.py[WARNING]: Calling '
> http://192.168.0.7/latest/meta-data/instance-id' failed [0/120s]: request
> error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries
> exceeded with url: /latest/meta-data/instance-id (Caused by
> NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7fc18>: Failed to establish a new connection: [Errno
> 111] Connection refused',))]
> [  138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 -
> url_helper.py[WARNING]: Calling '
> http://192.168.0.7/latest/meta-data/instance-id' failed [1/120s]: request
> error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries
> exceeded with url: /latest/meta-data/instance-id (Caused by
> NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7fa58>: Failed to establish a new connection: [Errno
> 111] Connection refused',))]
>
> After giving up on 169.254.169.254 it tries 192.168.0.7, which is the DHCP
> address for the project.
>
> I've checked that neutron-l3-agent is running, without errors. On the
> compute node where the VM is running, the agents and vswitch are running. I
> checked the namespace of a problematic project and saw an iptables rule
> redirecting traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is a
> neutron-ns-metadata-proxy process that opens that port. So it looks like
> the metadata proxy is running fine. But, as we can see in the logs, there
> is a timeout.
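>
> The checks above, expressed as commands (the router ID is a placeholder):
>
> ip netns list
> ip netns exec qrouter-<ROUTER_ID> iptables -t nat -S | grep 169.254.169.254
> ip netns exec qrouter-<ROUTER_ID> ss -lntp | grep 9697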
>
> If I restart all services on the network node that sometimes solves the
> problem. In some cases I have to restart services on the controller node
> (nova-api). Then everything works fine for some time before the problems
> start again.
>
> Where can I investigate to find the cause of the problem?
>
> I appreciate any help. Thank you!
>
> - JLC
>
> ------------------------------
>
> Message: 12
> Date: Mon, 26 Feb 2018 13:44:37 +0100
> From: Itxaka Serrano Garcia <igarcia at suse.com>
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] Instances lost connectivity with metadata
>         service.
> Message-ID: <a0384617-2edd-d310-1c83-234880bde814 at suse.com>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi!
>
>
> On 26/02/18 12:53, Jorge Luiz Correa wrote:
> > I would like some help identifying (and correcting) a problem with
> > instance metadata during boot. My environment is a Mitaka installation
> > on Ubuntu 16.04 LTS, with 1 controller, 1 network node and 5 compute
> > nodes. I'm using classic OVS as the network setup.
> >
> > The problem occurs after some period of time in some projects (not all
> > projects at the same time). When booting an Ubuntu Cloud Image with
> > cloud-init, instances lose the connection to the metadata API and don't
> > get their information like key pairs and cloud-init scripts.
> >
> > [  118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 -
> > url_helper.py[WARNING]: Calling
> > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> > [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254',
> > port=80): Max retries exceeded with url:
> > /2009-04-04/meta-data/instance-id (Caused by
> > ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out.
> > (connect timeout=50.0)'))]
> > [  136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 -
> > url_helper.py[WARNING]: Calling
> > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> > [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254',
> > port=80): Max retries exceeded with url:
> > /2009-04-04/meta-data/instance-id (Caused by
> > ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out.
> > (connect timeout=17.0)'))]
> > [  137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 -
> > DataSourceEc2.py[CRITICAL]: Giving up on md from
> > ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120
> > seconds
> > [  137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 -
> > url_helper.py[WARNING]: Calling
> > 'http://192.168.0.7/latest/meta-data/instance-id' failed [0/120s]:
> > request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max
> > retries exceeded with url: /latest/meta-data/instance-id (Caused by
> > NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7fc18>: Failed to establish a new connection:
> > [Errno 111] Connection refused',))]
> > [  138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 -
> > url_helper.py[WARNING]: Calling
> > 'http://192.168.0.7/latest/meta-data/instance-id' failed [1/120s]:
> > request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max
> > retries exceeded with url: /latest/meta-data/instance-id (Caused by
> > NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7fa58>: Failed to establish a new connection:
> > [Errno 111] Connection refused',))]
> >
> > After giving up on 169.254.169.254 it tries 192.168.0.7, which is the
> > DHCP address for the project.
> >
> > I've checked that neutron-l3-agent is running, without errors. On the
> > compute node where the VM is running, the agents and vswitch are
> > running. I checked the namespace of a problematic project and saw an
> > iptables rule redirecting traffic from 169.254.169.254:80 to
> > 0.0.0.0:9697, and there is a neutron-ns-metadata-proxy process that
> > opens that port. So it looks like the metadata proxy is running fine.
> > But, as we can see in the logs, there is a timeout.
> >
>
> Did you check if port 80 is listening inside the dhcp namespace with "ip
> netns exec NAMESPACE netstat -punta" ?
>
> We recently hit something similar in which the ns-proxy was up and the
> metadata-agent as well, but port 80 was missing inside the namespace. A
> restart fixed it, but there were no logs of a failure anywhere, so it may
> be similar.
>
> > If I restart all services on the network node that sometimes solves the
> > problem. In some cases I have to restart services on the controller node
> > (nova-api). Then everything works fine for some time before the problems
> > start again.
> >
> > Where can I investigate to find the cause of the problem?
> >
> > I appreciate any help. Thank you!
> >
> > - JLC
> >
> >
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> ------------------------------
>
> Message: 13
> Date: Tue, 27 Feb 2018 08:43:35 +0000
> From: Tobias Urdin <tobias.urdin at crystone.com>
> To: Jorge Luiz Correa <correajl at gmail.com>
> Cc: openstack <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Instances lost connectivity with metadata
>         service.
> Message-ID: <c990783e99e046039ab999effa73c0c1 at mb01.staff.ognet.se>
> Content-Type: text/plain; charset="utf-8"
>
> Did some troubleshooting on this myself just some days ago.
>
> You want to check out the neutron-metadata-agent log in
> /var/log/neutron/neutron-metadata-agent.log
>
> The neutron-metadata-agent in turn connects to your nova endpoint (as
> resolved via keystone) to talk to the nova metadata API (nova API port
> 8775) to get instance information.
>
> I had an issue with connectivity between neutron-metadata-agent and the
> nova metadata API that caused this for me.
>
> You should probably check the nova metadata API logs as well.
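>
> A quick end-to-end check from the network node (the host name is a
> placeholder; the bare metadata API root should answer with a list of
> versions):
>
> curl -i http://<NOVA_API_HOST>:8775/
> tail -f /var/log/neutron/neutron-metadata-agent.log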
>
>
> Best regards
>
> On 02/26/2018 01:00 PM, Jorge Luiz Correa wrote:
> I would like some help identifying (and correcting) a problem with instance
> metadata during boot. My environment is a Mitaka installation on Ubuntu
> 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes. I'm using
> classic OVS as the network setup.
>
> The problem occurs after some period of time in some projects (not all
> projects at the same time). When booting an Ubuntu Cloud Image with
> cloud-init, instances lose the connection to the metadata API and don't get
> their information like key pairs and cloud-init scripts.
>
> [  118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 -
> url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-
> 04-04/meta-data/instance-id' failed [101/120s]: request error
> [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries
> exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out.
> (connect timeout=50.0)'))]
> [  136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 -
> url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-
> 04-04/meta-data/instance-id' failed [119/120s]: request error
> [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries
> exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out.
> (connect timeout=17.0)'))]
> [  137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 -
> DataSourceEc2.py[CRITICAL]: Giving up on md from ['
> http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120
> seconds
> [  137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 -
> url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/
> meta-data/instance-id' failed [0/120s]: request error
> [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded
> with url: /latest/meta-data/instance-id (Caused by
> NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7fc18>: Failed to establish a new connection: [Errno
> 111] Connection refused',))]
> [  138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 -
> url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/
> meta-data/instance-id' failed [1/120s]: request error
> [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded
> with url: /latest/meta-data/instance-id (Caused by
> NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> object at 0x7faabcd7fa58>: Failed to establish a new connection: [Errno
> 111] Connection refused',))]
>
> After giving up on 169.254.169.254 it tries 192.168.0.7, which is the DHCP
> address for the project.
>
> I've checked that neutron-l3-agent is running, without errors. On the
> compute node where the VM is running, the agents and vswitch are running. I
> checked the namespace of a problematic project and saw an iptables rule
> redirecting traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is a
> neutron-ns-metadata-proxy process that opens that port. So it looks like
> the metadata proxy is running fine. But, as we can see in the logs, there
> is a timeout.
>
> If I restart all services on the network node that sometimes solves the
> problem. In some cases I have to restart services on the controller node
> (nova-api). Then everything works fine for some time before the problems
> start again.
>
> Where can I investigate to find the cause of the problem?
>
> I appreciate any help. Thank you!
>
> - JLC
>
>
> ------------------------------
>
> Message: 14
> Date: Tue, 27 Feb 2018 09:26:49 -0600
> From: Paras pradhan <pradhanparas at gmail.com>
> To: Jorge Luiz Correa <correajl at gmail.com>
> Cc: openstack <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Instances lost connectivity with metadata
>         service.
> Message-ID:
>         <CADyt5g=-4rr9NM-1-a-cMrkNoXQJDzO9AN=yh9v39H0g-
> rYTog at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> If this is project specific, I usually run a router update, which fixes the
> problem.
>
> /usr/bin/neutron router-update --admin-state-up False $routerid
> /usr/bin/neutron router-update --admin-state-up True $routerid
>
> On Mon, Feb 26, 2018 at 5:53 AM, Jorge Luiz Correa <correajl at gmail.com>
> wrote:
>
> > I would like some help identifying (and correcting) a problem with
> > instance metadata during boot. My environment is a Mitaka installation on
> > Ubuntu 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes.
> > I'm using classic OVS as the network setup.
> >
> > The problem occurs after some period of time in some projects (not all
> > projects at the same time). When booting an Ubuntu Cloud Image with
> > cloud-init, instances lose the connection to the metadata API and don't
> > get their information like key pairs and cloud-init scripts.
> >
> > [  118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 -
> > url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-
> > 04-04/meta-data/instance-id' failed [101/120s]: request error
> > [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries
> > exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> > ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out.
> > (connect timeout=50.0)'))]
> > [  136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 -
> > url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-
> > 04-04/meta-data/instance-id' failed [119/120s]: request error
> > [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries
> > exceeded with url: /2009-04-04/meta-data/instance-id (Caused by
> > ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out.
> > (connect timeout=17.0)'))]
> > [  137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 -
> > DataSourceEc2.py[CRITICAL]: Giving up on md from ['
> > http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120
> > seconds
> > [  137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 -
> > url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/
> > meta-data/instance-id' failed [0/120s]: request error
> > [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded
> > with url: /latest/meta-data/instance-id (Caused by
> > NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7fc18>: Failed to establish a new connection: [Errno
> > 111] Connection refused',))]
> > [  138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 -
> > url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/
> > meta-data/instance-id' failed [1/120s]: request error
> > [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded
> > with url: /latest/meta-data/instance-id (Caused by
> > NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection
> > object at 0x7faabcd7fa58>: Failed to establish a new connection: [Errno
> > 111] Connection refused',))]
> >
> > After giving up on 169.254.169.254 it tries 192.168.0.7, which is the
> > DHCP address for the project.
> >
> > I've checked that neutron-l3-agent is running, without errors. On the
> > compute node where the VM is running, the agents and vswitch are running.
> > I checked the namespace of a problematic project and saw an iptables rule
> > redirecting traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is
> > a neutron-ns-metadata-proxy process that opens that port. So it looks
> > like the metadata proxy is running fine. But, as we can see in the logs,
> > there is a timeout.
> >
> > If I restart all services on the network node that sometimes solves the
> > problem. In some cases I have to restart services on the controller node
> > (nova-api). Then everything works fine for some time before the problems
> > start again.
> >
> > Where can I investigate to find the cause of the problem?
> >
> > I appreciate any help. Thank you!
> >
> > - JLC
> >
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
>
> ------------------------------
>
> Message: 15
> Date: Wed, 28 Feb 2018 14:45:19 +0000
> From: Eugen Block <eblock at nde.ag>
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] Compute Node not mounting disk to VM's
> Message-ID:
>         <20180228144519.Horde.IQRwtkWf6QBEm4Qm7gvDXSl at webmail.nde.ag>
> Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes
>
> Hi,
>
> unfortunately, I don't have an answer for you, but it seems that
> you're not alone with this. In the past 10 days or so I have read
> about very similar issues multiple times (e.g. [1], [2]). In fact, it
> sounds like the update could be responsible for these changes.
>
> Usually, you can change the disk_bus by specifying glance image
> properties, something like this:
>
> openstack image set --property hw_scsi_model=virtio-scsi --property
> hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property
> os_require_quiesce=yes <IMAGE_ID>
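>
> (To confirm the change took effect, "openstack image show <IMAGE_ID> -c
> properties" should then list the new keys.)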
>
> But I doubt this will have any effect; there has to be something else
> telling libvirt to use scsi instead of virtio. I hope someone else has an
> idea where to look, since I don't have this issue and can't reproduce it.
>
> What is your output for
>
> ---cut here---
> root at compute:~ # grep -A3 virtio-blk
> /usr/lib/udev/rules.d/60-persistent-storage.rules
> # virtio-blk
> KERNEL=="vd*[!0-9]", ATTRS{serial}=="?*",
> ENV{ID_SERIAL}="$attr{serial}",
> SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}"
> KERNEL=="vd*[0-9]", ATTRS{serial}=="?*",
> ENV{ID_SERIAL}="$attr{serial}",
> SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}-part%n"
> ---cut here---
>
> You could also take a look into
> /etc/glance/metadefs/compute-libvirt-image.json, maybe there is
> something wrong there, but as I said, I can't really reproduce this.
>
> Good luck!
>
> [1] https://ask.openstack.org/en/question/112488/libvirt-not-allocating-cpu-and-disk-to-vms-after-the-os-update/
> [2] https://bugs.launchpad.net/nova/+bug/1560965
>
>
> Quoting Yedhu Sastry <yedhusastri at gmail.com>:
>
> > Hello,
> >
> > I have an OpenStack cluster (Newton) which is basically a test cluster.
> > After the regular OS security update and upgrade on all my compute nodes
> > I have a problem with new VMs. While launching new VMs I am getting the
> > error "ALERT! LABEL=cloudimg-rootfs does not exist. Dropping to a shell!"
> > in the console log of the VMs. In Horizon they show as active. I am
> > booting from an image, not from a volume. Before the update everything
> > was fine.
> >
> > Then I checked all the logs related to OpenStack and I can't find any
> > info related to this. I spent days on it and found that after the update
> > libvirt is now using scsi instead of virtio. I don't know why. All the
> > VMs I created before the update are running fine and use 'virtio'. I
> > then tried to manually change the libvirt instancexx.xml file to use
> > "<target dev='vda' bus='virtio'/>" and started the VM again with 'virsh
> > start instancexx'. The VM started and then went to the shutdown state.
> > But in the console log I can see the VM getting an IP and booting
> > properly without any error before it goes to the poweroff state.
> >
> >
> > 1) Is this issue related to the update of libvirt? If so, why is libvirt
> > no longer using virtio_blk, and why is it using only virtio_scsi? Is it
> > possible to change libvirt to use virtio_blk instead of virtio_scsi?
> >
> > 2) I found that the nova package version on the compute nodes is 14.0.10
> > while on the controller node it is 14.0.1. Could this be the cause of
> > the problem, and would an update on the controller node solve it? I am
> > not sure about this.
> >
> > 3) Why is the task status of instancexx shown as Powering Off in Horizon
> > after 'virsh start instancexx' on the compute node? Why does it not
> > start the VM with the manually customized libvirt .xml file?
> >
> >
> > Any help is really appreciated.
> >
> >
> > --
> >
> > Thank you for your time and have a nice day,
> >
> >
> > With kind regards,
> > Yedhu Sastri
>
>
>
> --
> Eugen Block                             voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                         e-mail  : eblock at nde.ag
>
>          Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>            Sitz und Registergericht: Hamburg, HRB 90934
>                    Vorstand: Jens-U. Mozdzen
>                     USt-IdNr. DE 814 013 983
>
>
>
>
> ------------------------------
>
> Message: 16
> Date: Wed, 28 Feb 2018 15:19:31 +0000
> From: Steven Relf <srelf at ukcloud.com>
> To: Yedhu Sastry <yedhusastri at gmail.com>,
>         "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Compute Node not mounting disk to VM's
> Message-ID: <2DE7E082-60A9-4FA0-965B-D40F24A5F27D at ukcloud.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi
>
> With regards to this.
>
> 3) Why is the task status of instancexx shown as Powering Off in Horizon
> after 'virsh start instancexx' on the compute node? Why does it not start
> the VM with the manually customized libvirt .xml file?
>
> I think that, by default, if you power on an instance via the virsh
> command on a hypervisor while nova thinks the instance should be shut off,
> nova will initiate a shutdown again to ensure the hypervisor state and the
> nova state match.
>
> I believe it is configurable, but I’m struggling to remember where.
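>
> If it helps, I believe the periodic task that does this is controlled in
> nova.conf (option name as listed in nova's configuration reference;
> disabling the sync is usually only sensible while debugging):
>
> [DEFAULT]
> # seconds between power-state syncs; -1 disables the periodic task
> sync_power_state_interval = -1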
>
> Rgds
> Steve
>
>
> Steven Relf - Technical Authority Cloud Native Infrastructure
> srelf at ukcloud.com
> +44 7500 085 864
> www.ukcloud.com
> A8, Cody Technology Park, Ively Road, Farnborough, GU14 0LX
> Notice: This message contains information that may be privileged or
> confidential and is the property of UKCloud Ltd. It is intended only for
> the person to whom it is addressed. If you are not the intended recipient,
> you are not authorised to read, print, retain, copy, disseminate,
> distribute, or use this message or any part thereof. If you receive this
> message in error, please notify the sender immediately and delete all
> copies of this message. UKCloud reserves the right to monitor all e-mail
> communications through its networks. UKCloud Ltd is registered in England
> and Wales: Company No: 07619797. Registered office: Hartham Park, Hartham,
> Corsham, Wiltshire SN13 0RP.
>
> ------------------------------
>
> Message: 17
> Date: Thu, 1 Mar 2018 16:27:27 -0500
> From: Corey Bryant <corey.bryant at canonical.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>,  Openstack
>         <openstack at lists.openstack.org>
> Subject: [Openstack]  OpenStack Queens for Ubuntu 16.04 LTS
> Message-ID:
>         <CADn0iZ2mMkVHVwqL5PsfWFaFCJFgd-LSa1=73RtBy90xrtS9jQ at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi All,
>
> The Ubuntu OpenStack team at Canonical is pleased to announce the general
> availability of OpenStack Queens on Ubuntu 16.04 LTS via the Ubuntu Cloud
> Archive. Details of the Queens release can be found at:
> https://www.openstack.org/software/queens
>
> To get access to the Ubuntu Queens packages:
>
> Ubuntu 16.04 LTS
> ------------------------
>
> You can enable the Ubuntu Cloud Archive pocket for OpenStack Queens on
> Ubuntu 16.04 installations by running the following commands:
>
> sudo add-apt-repository cloud-archive:queens
> sudo apt update
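>
> Existing installations can then pull in the Queens packages with a normal
> package upgrade (the usual service-level upgrade steps, such as database
> migrations, still apply):
>
> sudo apt dist-upgrade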
>
> The Ubuntu Cloud Archive for Queens includes updates for:
>
> aodh, barbican, ceilometer, ceph (12.2.2), cinder, congress, designate,
> designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi,
> heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum,
> manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe,
> networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl,
> networking-ovn, networking-sfc, neutron, neutron-dynamic-routing,
> neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas,
> neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0),
> panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard,
> senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.
>
> For a full list of packages and versions, please refer to [0].
>
> Branch Package Builds
> -------------------------------
> If you would like to try out the latest updates to the branches, we
> deliver continuously integrated packages on each upstream commit via the
> following PPAs:
>
>    sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
>    sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
>    sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
>    sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
>    sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
>
> Reporting bugs
> ---------------------
> If you have any issues please report bugs using the 'ubuntu-bug' tool to
> ensure that bugs get logged in the right place in Launchpad:
>
> sudo ubuntu-bug nova-conductor
>
> Thanks to everyone who has contributed to OpenStack Queens, both upstream
> and downstream!
>
> Have fun and see you in Rocky!
>
> Regards,
> Corey
> (on behalf of the Ubuntu OpenStack team)
>
> [0]
> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-
> archive/queens_versions.html
>
> ------------------------------
>
> Message: 18
> Date: Mon, 05 Mar 2018 13:41:11 GMT
> From: "Torin Woltjer" <torin.woltjer at granddial.com>
> To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: [Openstack] Migration of attached cinder volumes fails.
> Message-ID: <1794b1a4994d4035b30539100be5a59a at granddial.com>
> Content-Type: text/plain; charset="utf-8"
>
> The backend being used for all storage is ceph, with different pools for
> nova, glance, and cinder; cinder has separate pools for SSD and HDD. The
> goal is to be able to migrate VMs from HDD-backed storage to SSD-backed
> storage without downtime. Migrating volumes that are not attached works
> as expected; however, when migrating a volume attached to an instance,
> the migration appears to fail: I can see the new volume being created and
> then deleted, while the old volume remains. This is the log file for
> nova-compute during the migration: http://paste.openstack.org/raw/691729/
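>
> For context, the commands I would expect to drive this with are along
> these lines (volume and type names are placeholders only):
>
> # retype the volume to the ssd-backed type, migrating the data on demand
> cinder retype --migration-policy on-demand myvolume ssd-type
>
> # or migrate the volume explicitly to another backend
> cinder migrate <volume_uuid> <host@backend#pool>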
>
>
>
> ------------------------------
>
> Message: 19
> Date: Mon, 5 Mar 2018 15:48:26 +0100
> From: Andrea Gatta <andrea.gatta at gmail.com>
> To: openstack at lists.openstack.org
> Subject: [Openstack] Can't start instance - "Instance failed network
>         setup after 1 attempt(s)/No valid host was found. There are not
> enough
>         hosts available"
> Message-ID:
>         <CAETPU8PCCyQfCaXo9=wsfqT5w468bz0_iCamy8dDVz+TBthrBQ at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello guys,
> I am fairly new to Openstack and am building a home lab to experiment with
> it at my own pace.
>
> Here's my present setup:
>
> host/hypervisor: vmware workstation 10 (1 xeon 4 cores, 40 GB RAM)
> os: Centos 7
> openstack release: Newton
>
> Architecture is fairly simple:
>
> 1x Controller node (1 vcpu, 4 GB RAM)
> 1x Compute node (1   vcpu, 4 GB RAM)
>
> After a couple of days of work I now have a working lab, but I am stuck,
> unable to create and start a basic cirros instance.
>
> The issue has been confirmed using Horizon as well (instance creation
> fails with the same errors).
>
>
> *root at controller1 nova]# openstack server create --flavor m1.nano --image
> cirros --nic net-id=cd37f4c3-7860-4183-8901-deeb48448fe4 --security-group
> default   --key-name mykey selfservice-instance*
>
> root at controller1 ~]# openstack server list
> +--------------------------------------+--------------------
> --+--------+----------+------------+
> | ID                                   | Name                 | Status |
> Networks | Image Name |
> +--------------------------------------+--------------------
> --+--------+----------+------------+
> | 2a100590-6d7c-4d04-aecb-9dc2011252f5 | selfservice-instance | ERROR  |
>       | cirros     |
>
> *openstack server show selfservice-instance*
> ....
> fault                                | {u'message': u'No valid host was
> found. There are not enough hosts available.', u'code': 500
>
> *nova-scheduler.log*
>
> Filter results: ['RetryFilter: (start: 1, end: 0)']
>
> ['RetryFilter: (start: 1, end: 0)']
>
> As for the installation process I followed the openstack official
> documentation at
>
> https://docs.openstack.org/newton/install-guide-rdo/index.html
>
> After a bit of digging I've found that the instance had failed network
> setup
>
> */var/log/nova/nova-compute.log*
>
> 2018-03-05 11:35:08.939 20920 ERROR nova.compute.manager
> [req-b252833b-e6b4-43ac-8d95-5ccec002e74c e35fc188170d4144a9cd4d30f9eab65c
> bad15e4bc5714298b275e2f45ec8a6ff - - -] *Instance failed network setup
> after 1 attempt(s)*
>
> Up to this point I have reviewed the whole configuration several times,
> with a special focus on the nova<>neutron integration, but at present I
> haven't been able to figure out what is going on.
>
> Rabbitmq seems to work fine and communications between controller and
> compute nodes work as expected (no logs to prove otherwise found).
>
> Just in case anyone was wondering whether openstack had networks to play
> with, here's the output of 'openstack network list':
>
> I am using QEMU with KVM acceleration.
>
> *[root at controller1 etc]# openstack network list*
> +--------------------------------------+-------------+------
> --------------------------------+
> | ID                                   | Name        | Subnets
>                 |
> +--------------------------------------+-------------+------
> --------------------------------+
> | 982445b2-deb9-4308-8580-9de20992c4dd | provider    |
> ccd0290f-1640-4354-b56d-1a95c8c19ec0 |
> | cd37f4c3-7860-4183-8901-deeb48448fe4 | selfservice |
> 6096dff6-4567-4666-9e10-6dd718514e86 |
> +--------------------------------------+-------------+------
> --------------------------------+
>
> Clues anyone ?
>
> Thanks in advance
>
> Cheers
> Andrea
>
> ------------------------------
>
> Message: 20
> Date: Mon, 05 Mar 2018 15:47:41 +0000
> From: Eugen Block <eblock at nde.ag>
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] Can't start instance - "Instance failed
>         network setup after 1 attempt(s)/No valid host was found. There are
>         not enough hosts available"
> Message-ID:
>         <20180305154741.Horde.JpXYyR93IZTpdwSqgnVNsTU at webmail.nde.ag>
> Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes
>
> Hi,
>
> my first action would be to enable debug mode for neutron and then review
> all of the logs (server, dhcp-agent, linuxbridge-agent, etc.). At least
> one of them should report errors; maybe they point you in the
> right direction.
>
> Have you checked 'openstack network agent list'? Are all agents up?
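>
> For example, to crank up the logging (paths assuming a standard RDO
> install):
>
> # /etc/neutron/neutron.conf on the controller and compute nodes
> [DEFAULT]
> debug = true
>
> Then restart the neutron services and watch the logs while retrying:
>
> tail -f /var/log/neutron/*.log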
>
> Regards
>
>
> Zitat von Andrea Gatta <andrea.gatta at gmail.com>:
>
> > Hello guys,
> > I am fairly new to Openstack and am building a home lab to experiment
> with
> > it at my own pace.
> >
> > Here's my present setup:
> >
> > host/hypervisor: vmware workstation 10 (1 xeon 4 cores, 40 GB RAM)
> > os: Centos 7
> > openstack release: Newton
> >
> > Architecture is fairly simple:
> >
> > 1x Controller node (1 vcpu, 4 GB RAM)
> > 1x Compute node (1   vcpu, 4 GB RAM)
> >
> > After a couple of days of work I now have a working lab, but I am stuck,
> > unable to create and start a basic cirros instance.
> >
> > The issue has been confirmed using Horizon as well (instance creation
> > fails with the same errors).
> >
> >
> > *root at controller1 nova]# openstack server create --flavor m1.nano
> --image
> > cirros --nic net-id=cd37f4c3-7860-4183-8901-deeb48448fe4
> --security-group
> > default   --key-name mykey selfservice-instance*
> >
> > root at controller1 ~]# openstack server list
> > +--------------------------------------+--------------------
> --+--------+----------+------------+
> > | ID                                   | Name                 | Status |
> > Networks | Image Name |
> > +--------------------------------------+--------------------
> --+--------+----------+------------+
> > | 2a100590-6d7c-4d04-aecb-9dc2011252f5 | selfservice-instance | ERROR  |
> >       | cirros     |
> >
> > *openstack server show selfservice-instance*
> > ....
> > fault                                | {u'message': u'No valid host was
> > found. There are not enough hosts available.', u'code': 500
> >
> > *nova-scheduler.log*
> >
> > Filter results: ['RetryFilter: (start: 1, end: 0)']
> >
> > ['RetryFilter: (start: 1, end: 0)']
> >
> > As for the installation process I followed the openstack official
> > documentation at
> >
> > https://docs.openstack.org/newton/install-guide-rdo/index.html
> >
> > After a bit of digging I've found that the instance had failed network
> setup
> >
> > */var/log/nova/nova-compute.log*
> >
> > 2018-03-05 11:35:08.939 20920 ERROR nova.compute.manager
> > [req-b252833b-e6b4-43ac-8d95-5ccec002e74c e35fc188170d4144a9cd4d30f9eab6
> 5c
> > bad15e4bc5714298b275e2f45ec8a6ff - - -] *Instance failed network setup
> > after 1 attempt(s)*
> >
> > Up to this point I have reviewed the whole configuration several times,
> > with a special focus on the nova<>neutron integration, but at present I
> > haven't been able to figure out what is going on.
> >
> > Rabbitmq seems to work fine and communications between controller and
> > compute nodes work as expected (no logs to prove otherwise found).
> >
> > Just in case anyone was wondering whether openstack had networks to play
> > with, here's the output of 'openstack network list':
> >
> > I am using QEMU with KVM acceleration.
> >
> > *[root at controller1 etc]# openstack network list*
> > +--------------------------------------+-------------+------
> --------------------------------+
> > | ID                                   | Name        | Subnets
> >                 |
> > +--------------------------------------+-------------+------
> --------------------------------+
> > | 982445b2-deb9-4308-8580-9de20992c4dd | provider    |
> > ccd0290f-1640-4354-b56d-1a95c8c19ec0 |
> > | cd37f4c3-7860-4183-8901-deeb48448fe4 | selfservice |
> > 6096dff6-4567-4666-9e10-6dd718514e86 |
> > +--------------------------------------+-------------+------
> --------------------------------+
> >
> > Clues anyone ?
> >
> > Thanks in advance
> >
> > Cheers
> > Andrea
>
>
>
> --
> Eugen Block                             voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                         e-mail  : eblock at nde.ag
>
>          Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>            Sitz und Registergericht: Hamburg, HRB 90934
>                    Vorstand: Jens-U. Mozdzen
>                     USt-IdNr. DE 814 013 983
>
>
>
>
> ------------------------------
>
> Message: 21
> Date: Mon, 5 Mar 2018 17:42:46 +0100
> From: Ed - <eduard.barrera at gmail.com>
> To: openstack at lists.openstack.org
> Subject: [Openstack] [nova] using nova.scheduler.HostManager() (newbie
>         question)
> Message-ID:
>         <CAMO3UziFWh0FzdAuREegpEMxXjk6y2GvpnVOo-5b1B5Ma+HZSQ at mail.
> gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi all,
>
> I'm trying to get started with the openstack code. I'm using the
> nova.tests examples as they seem simple. I tried it on a packstack
> Newton deployment (where it works) and on some TripleO deployments
> (Pike), where it doesn't work :( Below you can find the code and the
> error:
>
> ~~~
> from oslo_config import cfg
> from oslo_context import context
> from oslo_log import log as logging
> from nova.common import config
> from nova import version
> from oslo_log import log
>
> from nova import context as nova_context
>
> from nova.scheduler import host_manager
> from nova import objects
>
> objects.register_all()
>
> CONF = cfg.CONF
> DOMAIN = "demo"
>
>
> CONF("", project='nova', version=version.version_string(),
> default_config_files=None)
> manager=host_manager.HostManager()
>
> ctx=nova_context.RequestContext()
> print  manager.get_all_host_states(ctx)
> ~~~
>
> ~~~
> 2018-03-05 16:37:47.654 628255 CRITICAL demo
> [req-43e79d3d-6032-44a8-8cfa-ef884d009cbc - - - - -] Unhandled error:
> ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table
> 'nova.cell_mappings' doesn't exist") [SQL: u'SELECT
> cell_mappings.created_at AS cell_mappings_created_at,
> cell_mappings.updated_at AS cell_mappings_updated_at, cell_mappings.id
> AS cell_mappings_id, cell_mappings.uuid AS cell_mappings_uuid,
> cell_mappings.name AS cell_mappings_name, cell_mappings.transport_url
> AS cell_mappings_transport_url, cell_mappings.database_connection AS
> cell_mappings_database_connection \nFROM cell_mappings ORDER BY
> cell_mappings.id ASC']
> 2018-03-05 16:37:47.654 628255 ERROR demo Traceback (most recent call
> last):
> 2018-03-05 16:37:47.654 628255 ERROR demo   File "context.py", line
> 34, in <module>
> 2018-03-05 16:37:47.654 628255 ERROR demo     print
> manager.get_all_host_states(ctx)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py",
> line 656, in get_all_host_states
> 2018-03-05 16:37:47.654 628255 ERROR demo     self._load_cells(context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py",
> line 627, in _load_cells
> 2018-03-05 16:37:47.654 628255 ERROR demo     self.cells =
> objects.CellMappingList.get_all(context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line
> 184, in wrapper
> 2018-03-05 16:37:47.654 628255 ERROR demo     result = fn(cls,
> context, *args, **kwargs)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/nova/objects/cell_mapping.py", line
> 137, in get_all
> 2018-03-05 16:37:47.654 628255 ERROR demo     db_mappings =
> cls._get_all_from_db(context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
> line 979, in wrapper
> 2018-03-05 16:37:47.654 628255 ERROR demo     return fn(*args, **kwargs)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/nova/objects/cell_mapping.py", line
> 133, in _get_all_from_db
> 2018-03-05 16:37:47.654 628255 ERROR demo
> asc(api_models.CellMapping.id)).all()
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line
> 2703, in all
> 2018-03-05 16:37:47.654 628255 ERROR demo     return list(self)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line
> 2855, in __iter__
> 2018-03-05 16:37:47.654 628255 ERROR demo     return
> self._execute_and_instances(context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line
> 2878, in _execute_and_instances
> 2018-03-05 16:37:47.654 628255 ERROR demo     result =
> conn.execute(querycontext.statement, self._params)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
> 945, in execute
> 2018-03-05 16:37:47.654 628255 ERROR demo     return meth(self,
> multiparams, params)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line
> 263, in _execute_on_connection
> 2018-03-05 16:37:47.654 628255 ERROR demo     return
> connection._execute_clauseelement(self, multiparams, params)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
> 1053, in _execute_clauseelement
> 2018-03-05 16:37:47.654 628255 ERROR demo     compiled_sql,
> distilled_params
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
> 1189, in _execute_context
> 2018-03-05 16:37:47.654 628255 ERROR demo     context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
> 1398, in _handle_dbapi_exception
> 2018-03-05 16:37:47.654 628255 ERROR demo
> util.raise_from_cause(newraise, exc_info)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line
> 203, in raise_from_cause
> 2018-03-05 16:37:47.654 628255 ERROR demo     reraise(type(exception),
> exception, tb=exc_tb, cause=cause)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
> 1182, in _execute_context
> 2018-03-05 16:37:47.654 628255 ERROR demo     context)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py",
> line 470, in do_execute
> 2018-03-05 16:37:47.654 628255 ERROR demo
> cursor.execute(statement, parameters)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in
> execute
> 2018-03-05 16:37:47.654 628255 ERROR demo     result = self._query(query)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in
> _query
> 2018-03-05 16:37:47.654 628255 ERROR demo     conn.query(q)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856,
> in query
> 2018-03-05 16:37:47.654 628255 ERROR demo     self._affected_rows =
> self._read_query_result(unbuffered=unbuffered)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057,
> in _read_query_result
> 2018-03-05 16:37:47.654 628255 ERROR demo     result.read()
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340,
> in read
> 2018-03-05 16:37:47.654 628255 ERROR demo     first_packet =
> self.connection._read_packet()
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1014,
> in _read_packet
> 2018-03-05 16:37:47.654 628255 ERROR demo     packet.check_error()
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393,
> in check_error
> 2018-03-05 16:37:47.654 628255 ERROR demo
> err.raise_mysql_exception(self._data)
> 2018-03-05 16:37:47.654 628255 ERROR demo   File
> "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in
> raise_mysql_exception
> 2018-03-05 16:37:47.654 628255 ERROR demo     raise errorclass(errno,
> errval)
> 2018-03-05 16:37:47.654 628255 ERROR demo ProgrammingError:
> (pymysql.err.ProgrammingError) (1146, u"Table 'nova.cell_mappings'
> doesn't exist") [SQL: u'SELECT cell_mappings.created_at AS
> cell_mappings_created_at, cell_mappings.updated_at AS
> cell_mappings_updated_at, cell_mappings.id AS cell_mappings_id,
> cell_mappings.uuid AS cell_mappings_uuid, cell_mappings.name AS
> cell_mappings_name, cell_mappings.transport_url AS
> cell_mappings_transport_url, cell_mappings.database_connection AS
> cell_mappings_database_connection \nFROM cell_mappings ORDER BY
> cell_mappings.id ASC']
> 2018-03-05 16:37:47.654 628255 ERROR demo
> ~~~
>
> print manager.get_all_host_states(ctx) is causing the previous
> trace. How can I avoid these errors? What is wrong?
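>
> (For what it's worth, the 'cell_mappings' table normally lives in the
> separate nova_api database, so my guess is that the nova.conf my script
> loads only points [database]/connection at the cell database. Perhaps
> something like the following is what is missing -- just a guess, the
> connection string is a placeholder.)
>
> [api_database]
> connection = mysql+pymysql://nova_api:PASSWORD@controller/nova_api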
>
> Thank you very much in advance.
>
>
>
> ------------------------------
>
> Message: 22
> Date: Mon, 5 Mar 2018 16:53:23 -0500
> From: Paul Belanger <pabelanger at redhat.com>
> To: openstack at lists.openstack.org
> Cc: openstack-dev at lists.openstack.org
> Subject: Re: [Openstack] [openstack-dev] Release Naming for S - time
>         to suggest a name!
> Message-ID: <20180305215323.GA14231 at localhost.localdomain>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Feb 20, 2018 at 08:19:59PM -0500, Paul Belanger wrote:
> > Hey everybody,
> >
> > Once again, it is time for us to pick a name for our "S" release.
> >
> > Since the associated Summit will be in Berlin, the Geographic
> > Location has been chosen as "Berlin" (State).
> >
> > Nominations are now open. Please add suitable names to
> > https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> > and 2018-03-05 23:59 UTC.
> >
> > In case you don't remember the rules:
> >
> > * Each release name must start with the letter of the ISO basic Latin
> > alphabet following the initial letter of the previous release, starting
> > with the initial release of "Austin". After "Z", the next name should
> > start with "A" again.
> >
> > * The name must be composed only of the 26 characters of the ISO basic
> > Latin alphabet. Names which can be transliterated into this character
> > set are also acceptable.
> >
> > * The name must refer to the physical or human geography of the region
> > encompassing the location of the OpenStack design summit for the
> > corresponding release. The exact boundaries of the geographic region
> > under consideration must be declared before the opening of nominations,
> > as part of the initiation of the selection process.
> >
> > * The name must be a single word with a maximum of 10 characters. Words
> > that describe the feature should not be included, so "Foo City" or "Foo
> > Peak" would both be eligible as "Foo".
> >
> > Names which do not meet these criteria but otherwise sound really cool
> > should be added to a separate section of the wiki page and the TC may
> > make an exception for one or more of them to be considered in the
> > Condorcet poll. The naming official is responsible for presenting the
> > list of exceptional names for consideration to the TC before the poll
> opens.
> >
> > Let the naming begin.
> >
> > Paul
> >
> Just a reminder, there are only a few more hours left to get your
> suggestions in for naming the next release.
>
> Thanks,
> Paul
>
>
>
> ------------------------------
>
> Message: 23
> Date: Mon, 5 Mar 2018 23:22:29 +0100
> From: Andrea Gatta <andrea.gatta at gmail.com>
> To: openstack <openstack at lists.openstack.org>
> Subject: [Openstack] Keystone Unauthorized: The request you have made
>         requires authentication while creating/starting instance
> Message-ID:
>         <CAETPU8MWrwcH5XQNo7Z0NXp1HEX21L_8P9=a-1-wu8ebSy7DLA at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello there,
> as per the subject, I am stuck trying to create/start a cirros image.
>
> At first I didn't notice but I can now say that while creating the image
> keystone logs the following warning:
>
> /var/log/keystone/keystone.log
>
> 2018-03-05 21:02:45.961 2120 INFO keystone.common.wsgi [
> req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] POST
> http://controller1:35357/v3/auth/tokens
> 2018-03-05 21:02:46.740 2120 WARNING keystone.common.wsgi
> [req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] Authorization failed.
> The request you have mad
>
> at the same time nova throws the following error:
>
> /var/log/nova/nova-compute.log
> 45ec8a6ff - - -] [instance: 7a789397-8fbd-47a7-a5f6-8b274f77ca72] Creating
> image
> 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager
> [req-b9f9e984-6f5f-4869-9290-63ca145d19e1 e35fc188170d4144a9cd4d30f9eab65c
> bad15e4bc5714298b275e2f45e
> c8a6ff - - -] Instance failed network setup after 1 attempt(s)
> 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Traceback (most
> recent call last):
> .......
>
> 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Unauthorized: The
> request you have made requires authentication. (HTTP 401) (Request-ID:
> req-5c4c9e26-dbe2-429f-b414-f6262b451392)
> 2018-03-05 21:26:34.736 1225 ERROR nova.compute.manager [instance:
> 7a789397-8fbd-47a7-a5f6-8b274f77ca72] Unauthorized: The request you have
> made requires authentication. (HTTP 401) (Request-ID:
> req-5c4c9e26-dbe2-429f-b414-f6262b451392)
>
> So basically the compute node sends
> req-5c4c9e26-dbe2-429f-b414-f6262b451392, which never gets a successful
> reply because keystone on the controller node denies it (the request IDs
> match).
>
> Up to this point I've checked the auth_uri and the nova user password in
> /etc/nova/nova.conf on both the controller and compute nodes. Moreover,
> I've checked the nova openstack user password with the command 'openstack
> user password set' (with the appropriate env). Credentials are ok all
> across the board.
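>
> To double-check, the very same credentials can be exercised directly
> against keystone, e.g. (password placeholder as above):
>
> openstack --os-auth-url http://controller1:35357/v3 \
>   --os-identity-api-version 3 \
>   --os-project-domain-name Default --os-user-domain-name Default \
>   --os-project-name service --os-username nova \
>   --os-password xxxxxxxx token issue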
>
> Here's the [keystone_authtoken] section for both controller and compute
> nodes
>
> [keystone_authtoken]
>
> auth_uri = http://controller1:5000
> auth_url = http://controller1:35357
> memcached_servers = controller1:11211
> auth_type = password
> project_domain_name = Default
> user_domain_name = Default
> project_name = service
> username = nova
> password = xxxxxxxx
>
>
> auth_uri = http://controller1:5000
> auth_url = http://controller1:35357
> memcached_servers = controller1:11211
> auth_type = password
> project_domain_name = Default
> user_domain_name = Default
> project_name = service
> username = nova
> password = xxxxxxxx
>
> Thanks in advance for any light you could shed on this.
>
> Regards
> Andrea
>
> ------------------------------
>
> Message: 24
> Date: Mon, 5 Mar 2018 17:41:37 -0600
> From: Melvin Hillsman <mrhillsman at gmail.com>
> To: OpenStack Mailing List <openstack at lists.openstack.org>
> Subject: [Openstack] Fwd: [Openstack-sigs] [forum] Brainstorming
>         Topics for Vancouver 2018
> Message-ID:
>         <CAMVtB2Hn5_-3WXAtdfuz+pUJdND5rtq-Q4c1A0N9zdOiZD-uQQ@
> mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> ---------- Forwarded message ----------
> From: Mike Perez <thingee at gmail.com>
> Date: Mon, Mar 5, 2018 at 5:15 PM
> Subject: [Openstack-sigs] [forum] Brainstorming Topics for Vancouver 2018
> To: openstack-dev at lists.openstack.org
> Cc: openstack-sigs at lists.openstack.org,
> openstack-operators at lists.openstack.org, user-committee at list.openstack.org
>
>
> Hi all,
>
> Welcome to the topic selection process for our Forum in Vancouver. Note
> that this is not a classic conference track with speakers and
> presentations. OpenStack community members (participants in development
> teams, SIGs, working groups, and other interested individuals) discuss
> the topics they want to cover and get alignment on, and we welcome your
> participation.
>
> The Forum is for the entire community to come together, to create a
> neutral space rather than having separate "ops" and "dev" days. Users
> should aim to come with ideas for the next release, gather feedback on
> the past version, and have strategic discussions that go beyond just one
> release cycle. We aim to ensure the broadest coverage of topics that will
> allow multiple parts of the community to get together and discuss key
> areas within our community/projects.
>
> There are two stages to the brainstorming:
>
> 1. Starting today, set up an etherpad with your team and start
> discussing ideas you'd like to talk about at the Forum and work out
> which ones to submit - just like you did prior to the design summit.
>
> 2. Then, in a couple of weeks, we will open up a more formal web-based
> tool for you to submit abstracts for the most popular sessions that came
> out of your brainstorming.
>
> Make an etherpad and add it to the list at:
> https://wiki.openstack.org/wiki/Forum/Vancouver2018
>
> One key thing we'd like to see (as always?) is cross-project
> collaboration, and discussion between every area of the community. Try
> to see if there is an interested working group on the user side to add
> to your ideas.
>
> Examples of typical discussions that include multiple parts of the
> community getting together to discuss:
>
>   * Strategic, whole-of-community discussions, to think about the big
>     picture, including beyond just one release cycle and new technologies
>       o eg Making OpenStack One Platform for containers/VMs/Bare Metal
>         (Strategic session) the entire community congregates to share
>         opinions on how to make OpenStack achieve its integration engine
>         goal
>   * Cross-project sessions, in a similar vein to what has happened at
>     past design summits, but with increased emphasis on issues that are
>     relevant to all areas of the community
>       o eg Rolling Upgrades at Scale (Cross-Project session) -- the
>         Large Deployments Team collaborates with Nova, Cinder and
>         Keystone to tackle issues that come up with rolling upgrades
>         when there's a large number of machines.
>   * Project-specific sessions, where developers can ask users specific
>     questions about their experience, users can provide feedback from
>     the last release and cross-community collaboration on the priorities
>     and 'blue sky' ideas for the next release.
>       o eg Neutron Pain Points (Project-Specific session) --
>         Co-organized by neutron developers and users. Neutron developers
>         bring some specific questions they want answered, Neutron users
>         bring feedback from the latest release and ideas about the future.
>
> Think about what kind of session ideas might end up as:
> Project-specific, cross-project or strategic/whole-of-community
> discussions. There'll be more slots for the latter two, so do try and
> think outside the box!
>
> This part of the process is where we gather broad community consensus -
> in theory the second part is just about fitting in as many of the good
> ideas into the schedule as we can.
>
> Further details about the forum can be found at:
> https://wiki.openstack.org/wiki/Forum
>
> --
> Mike Perez (thingee)
>
> _______________________________________________
> openstack-sigs mailing list
> openstack-sigs at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>
>
>
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: (832) 264-2646
>
> ------------------------------
>
> Message: 25
> Date: Tue, 06 Mar 2018 07:41:50 +0000
> From: Eugen Block <eblock at nde.ag>
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] Keystone Unauthorized: The request you have
>         made requires authentication while creating/starting instance
> Message-ID:
>         <20180306074150.Horde.nv6XKs7ZZHQDO-y1dasSGu3 at webmail.nde.ag>
> Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes
>
> Hi,
>
> you should also check your neutron auth configs and the respective log
> files, since nova reports "Instance failed network setup after 1
> attempt(s)". Set nova and neutron to debug mode to get more output.
> You could also try to run different neutron commands with the same
> credentials and see if any errors occur. Breaking it down to a
> specific service will help identify the issue.
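>
> For example, something along these lines, using the very same credentials
> nova uses to talk to neutron (password is a placeholder):
>
> export OS_AUTH_URL=http://controller1:35357/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=service
> export OS_USERNAME=neutron
> export OS_PASSWORD=xxxxxxxx
> openstack token issue
> openstack network agent list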
>
> Regards
>
>
> Zitat von Andrea Gatta <andrea.gatta at gmail.com>:
>
> > Hello there,
> > as per the subject, I am stuck trying to create/start a cirros image.
> >
> > At first I didn't notice but I can now say that while creating the image
> > keystone logs the following warning:
> >
> > /var/log/keystone/keystone.log
> >
> > 2018-03-05 21:02:45.961 2120 INFO keystone.common.wsgi [
> > req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] POST
> > http://controller1:35357/v3/auth/tokens
> > 2018-03-05 21:02:46.740 2120 WARNING keystone.common.wsgi
> > [req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] Authorization
> failed.
> > The request you have mad
> >
> > at the same time nova throws the following error:
> >
> > /var/log/nova/nova-compute.log
> > 45ec8a6ff - - -] [instance: 7a789397-8fbd-47a7-a5f6-8b274f77ca72]
> Creating
> > image
> > 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager
> > [req-b9f9e984-6f5f-4869-9290-63ca145d19e1 e35fc188170d4144a9cd4d30f9eab6
> 5c
> > bad15e4bc5714298b275e2f45e
> > c8a6ff - - -] Instance failed network setup after 1 attempt(s)
> > 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Traceback (most
> > recent call last):
> > .......
> >
> > 2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Unauthorized: The
> > request you have made requires authentication. (HTTP 401) (Request-ID:
> > req-5c4c9e26-dbe2-429f-b414-f6262b451392)
> > 2018-03-05 21:26:34.736 1225 ERROR nova.compute.manager [instance:
> > 7a789397-8fbd-47a7-a5f6-8b274f77ca72] Unauthorized: The request you have
> > made requires authentication. (HTTP 401) (Request-ID:
> > req-5c4c9e26-dbe2-429f-b414-f6262b451392)
> >
> > So basically the compute node sends
> > req-5c4c9e26-dbe2-429f-b414-f6262b451392, which never gets a successful
> > reply because keystone on the controller node denies it (the request IDs
> > match).
> >
> > Up to this point I've checked the auth_uri and the nova user password in
> > /etc/nova/nova.conf on both the controller and compute nodes. Moreover,
> > I've checked the nova openstack user password with the command 'openstack
> > user password set' (with the appropriate env). Credentials are ok all
> > across the board.
> >
> > Here's the [keystone_authtoken] section for both controller and compute
> > nodes
> >
> > [keystone_authtoken]
> >
> > auth_uri = http://controller1:5000
> > auth_url = http://controller1:35357
> > memcached_servers = controller1:11211
> > auth_type = password
> > project_domain_name = Default
> > user_domain_name = Default
> > project_name = service
> > username = nova
> > password = xxxxxxxx
> >
> >
> > auth_uri = http://controller1:5000
> > auth_url = http://controller1:35357
> > memcached_servers = controller1:11211
> > auth_type = password
> > project_domain_name = Default
> > user_domain_name = Default
> > project_name = service
> > username = nova
> > password = xxxxxxxx
> >
> > Thanks in advance for any light you could shed on this.
> >
> > Regards
> > Andrea
>
>
>
> --
> Eugen Block                             voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                         e-mail  : eblock at nde.ag
>
>          Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>            Sitz und Registergericht: Hamburg, HRB 90934
>                    Vorstand: Jens-U. Mozdzen
>                     USt-IdNr. DE 814 013 983
>
>
>
>
> ------------------------------
>
> Message: 26
> Date: Tue, 6 Mar 2018 08:53:53 +0000
> From: 谭 明宵 <tanmingxiao at outlook.com>
> To: openstack <openstack at lists.openstack.org>
> Subject: [Openstack] Some questions about "Cinder Multi-Attach" in
>         Openstack Queens
> Message-ID:
>         <SG2PR02MB1851BBA8494D7CB2EF7FA72EB5D90 at SG2PR02MB1851.
> apcprd02.prod.outlook.com>
>
> Content-Type: text/plain; charset="utf-8"
>
> I installed OpenStack Queens using devstack. I want to test the "Cinder
> Multi-Attach" function.
>
> 1. create a  multiattach volume
> ```
> # cinder type-create multiattach
> # cinder type-key multiattach set multiattach="<is> True"
> # cinder create 10 --name multiattach-volume --volume-type <volume_type_uuid>
> ```
> 2. attach the volume to two instances
> ```
> # nova volume-attach test01 <volume_uuid>
> # nova volume-attach test02 <volume_uuid>
> ```
> 3. mount the volume and create some files, but the files don't sync
> between the two instances. It seems as if they were two independent
> volumes.
>
> Then test02 creates a file, but I cannot find it on test01; the reverse
> is the same.
>
>
> I think I have something wrong; I expected the test to behave like
> "shared storage". (Or is it that multi-attach only shares the block
> device, so that a regular, non-cluster-aware filesystem such as ext4
> will not show the files on both instances?) What should the correct
> effect be like? Thanks.
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Openstack mailing list
> openstack at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> ------------------------------
>
> End of Openstack Digest, Vol 57, Issue 1
> ****************************************
>