[Openstack-operators] nova_api resource_providers table issues on ocata

Ignazio Cassano ignaziocassano at gmail.com
Thu Oct 18 19:00:37 UTC 2018


Hello, sorry for the late answer.
The following is the content of my Ocata repo file:

[centos-openstack-ocata]
name=CentOS-7 - OpenStack ocata
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-ocata/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
exclude=sip,PyQt4
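
For reference, a quick way to confirm that the installed nova packages really
come from this repo (a sketch; the package globs are an assumption):

# on a compute node: list installed nova packages and the repo they came from
yum list installed 'openstack-nova*' 'python-nova*'
# RDO packages should show @centos-openstack-ocata in the repo column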


EPEL is not enabled, as suggested in the documentation.
Regards
Ignazio

On Thu, 18 Oct 2018 at 10:24, Sylvain Bauza <sbauza at redhat.com> wrote:

>
>
> On Wed, Oct 17, 2018 at 4:46 PM Ignazio Cassano <ignaziocassano at gmail.com>
> wrote:
>
>> Hello, here is the output of the selects you suggested:
>>
>> MariaDB [nova]> select * from shadow_services;
>> Empty set (0,00 sec)
>>
>> MariaDB [nova]> select * from shadow_compute_nodes;
>> Empty set (0,00 sec)
>>
>> As far as the upgrade tooling is concerned, we only run yum update on the
>> old compute nodes so that they end up with the same packages as the new
>> compute nodes.
>>
>
>
> Well, to be honest, I was looking at another OSP bug,
> https://bugzilla.redhat.com/show_bug.cgi?id=1636463, which is pretty much
> identical, so you're not alone :-)
> For some reason, yum update modifies something in the DB, but I don't yet
> know what. Which exact packages are you using? RDO ones?
>
> I marked the downstream bug as NOTABUG since I wasn't able to reproduce it
> and since I also provided a SQL query for fixing it, but maybe we should
> try to see which specific package has a problem...
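>
> A possible starting point (just a sketch, not the query from the bug
> report): the last yum transaction on an affected node shows exactly which
> packages were changed:
>
> # on the upgraded compute node
> yum history list
> yum history info <transaction-id>   # lists every package updated in that run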
>
> -Sylvain
>
>
>> Procedure used (a command-level sketch follows the steps below):
>> We have an OpenStack deployment with 3 compute nodes: podto1-kvm01, podto1-kvm02,
>> podto1-kvm03
>> 1) install a new compute node (podto1-kvm04)
>> 2) On the controller, we discovered the new compute node: su -s /bin/sh -c
>> "nova-manage cell_v2 discover_hosts --verbose" nova
>> 3) Evacuate podto1-kvm01
>> 4) yum update on podto1-kvm01 and reboot it
>> 5) Evacuate podto1-kvm02
>> 6) yum update on podto1-kvm02 and reboot it
>> 7) Evacuate podto1-kvm03
>> 8) yum update on podto1-kvm03 and reboot it
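>>
>> Roughly, the per-node commands look like this (a sketch; the evacuation
>> commands are an assumption, since the steps above only say "Evacuate"):
>>
>> # example for podto1-kvm01; the same is repeated for kvm02 and kvm03
>> openstack compute service set --disable podto1-kvm01 nova-compute
>> nova host-evacuate-live podto1-kvm01   # live-migrate all instances off the node
>> ssh podto1-kvm01 'yum -y update && reboot'
>> openstack compute service set --enable podto1-kvm01 nova-compute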
>>
>>
>>
>> On Wed, 17 Oct 2018 at 16:37, Matt Riedemann <mriedemos at gmail.com> wrote:
>>
>>> On 10/17/2018 9:13 AM, Ignazio Cassano wrote:
>>> > Hello Sylvain, here is the output of some selects:
>>> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
>>> > +--------------+---------------------+
>>> > | host         | hypervisor_hostname |
>>> > +--------------+---------------------+
>>> > | podto1-kvm01 | podto1-kvm01        |
>>> > | podto1-kvm02 | podto1-kvm02        |
>>> > | podto1-kvm03 | podto1-kvm03        |
>>> > | podto1-kvm04 | podto1-kvm04        |
>>> > | podto1-kvm05 | podto1-kvm05        |
>>> > +--------------+---------------------+
>>> >
>>> > MariaDB [nova]> select host from compute_nodes where
>>> host='podto1-kvm01'
>>> > and hypervisor_hostname='podto1-kvm01';
>>> > +--------------+
>>> > | host         |
>>> > +--------------+
>>> > | podto1-kvm01 |
>>> > +--------------+
>>>
>>> Does your upgrade tooling run a db archive/purge at all? It's possible
>>> that the actual services table record was deleted via the os-services
>>> REST API for some reason, which would delete the compute_nodes table
>>> record, and then a restart of the nova-compute process would recreate
>>> the services and compute_nodes table records, but with a new compute
>>> node uuid and thus a new resource provider.
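>>>
>>> One way to check for that (a sketch, assuming the nova and nova_api
>>> databases live on the same MariaDB server) is to compare the compute node
>>> uuids with the resource provider uuids by name:
>>>
>>> select cn.hypervisor_hostname,
>>>        cn.uuid as compute_node_uuid,
>>>        rp.uuid as resource_provider_uuid
>>> from nova.compute_nodes cn
>>> left join nova_api.resource_providers rp on rp.name = cn.hypervisor_hostname
>>> where cn.deleted = 0;
>>> -- rows where the two uuids differ (or rp.uuid is NULL) point at the problem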
>>>
>>> Maybe query your shadow_services and shadow_compute_nodes tables for
>>> "podto1-kvm01" and see if a record existed at one point, was deleted and
>>> then archived to the shadow tables.
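>>>
>>> For example, something like:
>>>
>>> select * from shadow_services where host = 'podto1-kvm01';
>>> select * from shadow_compute_nodes where host = 'podto1-kvm01';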
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt
>>>