[Openstack-operators] Queens metadata agent error 500

Ignazio Cassano ignaziocassano at gmail.com
Tue Nov 13 07:35:32 UTC 2018


Hi Chris,
many thanks for your answer.
It solved the issue.
Regards
Ignazio

On Tue, 13 Nov 2018 at 03:46, Chris Apsey <
bitskrieg at bitskrieg.net> wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini?  The former was deprecated several releases ago
> and no longer functions as of Pike.  The metadata service will throw
> 500 errors if you don't change it.
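> For reference, a minimal sketch of the relevant metadata_agent.ini
> section after the rename (the host value below is a placeholder for
> your own nova-api endpoint, not taken from this thread):

```ini
[DEFAULT]
# nova_metadata_ip is deprecated; use nova_metadata_host instead.
# Point it at the host (or VIP) where nova-api serves the metadata API.
nova_metadata_host = controller.example.com
nova_metadata_port = 8775
```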
>
> On November 12, 2018 19:00:46 Ignazio Cassano <ignaziocassano at gmail.com>
> wrote:
>
>> Any other suggestions?
>> It still does not work.
>> Nova metadata is listening on port 8775, but I have found no way to
>> solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
>> skaplons at redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From the logs you attached, it looks like your neutron-metadata-agent
>>> can’t connect to the nova-api service. Please check whether the nova
>>> metadata API is reachable from the node where your
>>> neutron-metadata-agent is running.
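>>> A quick way to check that reachability is a plain TCP connect to the
>>> metadata port. This is just a sketch (the host shown is a placeholder
>>> for your own nova-api endpoint, not a value from this thread):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from the node hosting neutron-metadata-agent, e.g.:
# port_open("controller.example.com", 8775)
```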
>>>
>>> > On 12 Nov 2018, at 22:34, Ignazio Cassano <ignaziocassano at gmail.com>
>>> wrote:
>>> >
>>> > Hello again,
>>> > I have another installation running Ocata.
>>> > On Ocata, the metadata proxy for a network ID shows up in ps -afe like
>>> this:
>>> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On Queens it looks like this:
>>> >  haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is this the correct behaviour?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
>>> skaplons at redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can you share the logs from your haproxy metadata proxy, which is
>>> running in the qdhcp namespace? There should be some information there
>>> about the reason for those 500 errors.
>>> >
>>> > > On 12 Nov 2018, at 19:49, Ignazio Cassano <ignaziocassano at gmail.com>
>>> wrote:
>>> > >
>>> > > Hi All,
>>> > > I manually upgraded my CentOS 7 OpenStack installation from Ocata to
>>> Pike.
>>> > > Everything worked fine.
>>> > > Then I upgraded from Pike to Queens, and instances stopped being able
>>> to reach the metadata service on 169.254.169.254, receiving error 500.
>>> > > I have isolated metadata set to true in my DHCP configuration, and in
>>> the DHCP namespace port 80 is listening.
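>>> > > (For reference, the isolated-metadata option referred to here is
>>> enable_isolated_metadata in neutron's dhcp_agent.ini; a minimal sketch:)

```ini
[DEFAULT]
# Serve metadata from within the DHCP namespace for isolated networks,
# so instances can reach 169.254.169.254 without a router.
enable_isolated_metadata = True
```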
>>> > > Please, can anyone help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > _______________________________________________
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators at lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>
>
>

