New to OpenStack. Test environment with 1 control node and 2 compute nodes. Unable to migrate or resize instances
As the subject says, I am new to OpenStack and have spun up a test cluster with one control node and two compute nodes. It works for several tasks (e.g. I can spin up a cirros instance on both of the compute nodes, and that works just fine). However, when I attempt to migrate an instance from one compute node to the other, the instance goes into an error state. (To get it running again, I have to set the state to active and then reboot the instance.)

I am not sure which logs will have the information needed to debug this issue, but I have looked through the nova, cinder, and keystone logs. Nothing I am seeing points to an obvious cause of the failures.

Taking a snapshot fails as well, with this error:

Error: Unable to create snapshot. Details Invalid input received: Invalid volume: Volume e12b3ff8-9870-45f9-b59a-c6fb194ed517 status must be available, in-use, but current status is: attaching. (HTTP 400) (Request-ID: req-499d3bf9-97a0-48b2-8565-254a87287cff) (HTTP 400) (Request-ID: req-d0cd5aed-894e-40b2-8a2f-4b22188ec372)

Looking at the volumes in my environment, both show a status of "attaching". (Both are for a test cirros instance, both are type nfs, and both instances seem to work just fine, meaning I can connect to them on the console or via ssh and run commands without errors.)

Is the reason each of those tasks fails the "attaching" status of the volumes?

I spun up the cluster using packstack, and all the nodes are installed on Rocky Linux 9.4. I added an nfs backend after packstack was done, and that does seem to be working: the ephemeral volumes used to spin up the test instances appear to be on the nfs storage rather than the default lvm backend. I installed OpenStack from the Bobcat release (the centos-openstack-bobcat repository).
I have been crawling through the standalone install documentation to see if I can find something that explains why the instance migration isn't working, but, again, nothing is jumping out at me. Any guidance on where I should be looking, or advice on what I may be missing would be greatly appreciated.
As the listed operations [migrate, resize, snapshot] are from nova, you should start by looking in the nova service logs, and those should direct you to the other projects' logs (for the failed calls).

From the snapshot error:

`Error: Unable to create snapshot. Details Invalid input received: Invalid volume: Volume e12b3ff8-9870-45f9-b59a-c6fb194ed517 status must be available, in-use, but current status is: attaching. (HTTP 400) (Request-ID: req-499d3bf9-97a0-48b2-8565-254a87287cff) (HTTP 400) (Request-ID: req-d0cd5aed-894e-40b2-8a2f-4b22188ec372)`

Since you are able to use the VM, it's not BFV.

Can you tell how you attached the volume? On successful attachment the volume status should be updated to "in-use", not "attaching", and you should be able to mount and use the volume from inside the VM. Is this done by automation (create-attach-snapshot)? In that case, you might have called snapshot before the attachment finished, hence "attaching".

Also, considering you are new to OpenStack, the amount of information you have provided (error chain, release) is excellent.

Regards

On Wed, Sep 11, 2024 at 2:04 AM <collinl@churchofjesuschrist.org> wrote:
> As the subject says, I am new to OpenStack, and have spun up a test cluster with one control node, and two compute nodes. It seems to work for several tasks (i.e. I can spin up a cirros instance on both of the compute nodes, which seems to work just fine) However, when I attempt to migrate an instance from one compute node to the other, the instance goes into an error state. (In order to get it running again, I have to set the state to active, then reboot the instance)
> I am not sure which logs will have the required information to help debug this issue, but I have looked through nova and cinder and keystone logs. Nothing I am seeing is obvious to me as to the cause of the failures.
> Also taking a snapshot seems to fail as well, with this error: Error: Unable to create snapshot. Details Invalid input received: Invalid volume: Volume e12b3ff8-9870-45f9-b59a-c6fb194ed517 status must be available, in-use, but current status is: attaching. (HTTP 400) (Request-ID: req-499d3bf9-97a0-48b2-8565-254a87287cff) (HTTP 400) (Request-ID: req-d0cd5aed-894e-40b2-8a2f-4b22188ec372)
> Looking at the volumes in my environment, both volumes show a status of "attaching" (both are for a test cirros instance, both are type nfs, and both instances seem to work just fine, meaning I can connect to them either on the console or via ssh and run commands without errors)
> Is the reason that each of those tasks is failing because of the "attaching" status of the volumes?
> I spun up the cluster using packstack and all the nodes are installed on Rocky Linux 9.4. I added an nfs backend after packstack was all done, and that does seem to be working, as the ephemeral volumes used to spin up the test instances do seem to be on the nfs storage, rather than the default lvm storage backend.
> I have installed openstack using the bobcat release of openstack (from the centos-openstack-bobcat repository).
> I have been crawling through the standalone install documentation to see if I can find something that explains why the instance migration isn't working, but, again, nothing is jumping out at me.
> Any guidance on where I should be looking, or advice on what I may be missing would be greatly appreciated.
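Amit's point above (don't snapshot while the volume is still in "attaching") can be sketched as a small polling helper. This is illustrative and not part of the thread: `wait_for_volume_status` and `stub_status` are hypothetical names, and the stub stands in for the real `openstack volume show <id> -f value -c status` call, which needs a live cloud.

```shell
# Poll until a status-reporting command returns the wanted status, or give up.
# "$@" is the command to run, e.g.: openstack volume show "$VOLUME_ID" -f value -c status
wait_for_volume_status() {
  want="$1"; shift
  tries=0
  while [ "$tries" -lt 30 ]; do
    status="$("$@")"
    if [ "$status" = "$want" ]; then
      echo "reached $want"
      return 0
    fi
    tries=$((tries + 1))
    sleep 2
  done
  echo "timed out waiting for $want" >&2
  return 1
}

# Demo with a stub so the sketch is self-contained (a real volume would
# report "attaching" for a while before settling on "in-use"):
stub_status() { echo in-use; }
wait_for_volume_status in-use stub_status
```

Only once the volume reports "in-use" is it safe to kick off the snapshot.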
Amit, thanks for the quick response.

Not sure what you mean by "BFV"?

After your comments, I was able to make a little progress. I hadn't looked at the logs on the compute nodes prior to that; I was only looking on the controller. I saw an error related to the compute node trying to make a connection to the database. I made that fix in nova.conf on the compute nodes, and now when I spin up a new instance, the volume correctly shows "in-use" and that it is attached to the instance :-)

However, I am still getting some errors that I don't know what to do with. On the controller, I am seeing a bunch of these:

/var/log/nova/nova-metadata-api.log:2024-09-11 09:59:51.273 560137 ERROR nova.api.metadata.handler [None req-799be5d5-5a00-4eb9-a90b-8561f73949ab - - - - - -] Failed to get metadata for instance id: f3a05784-7404-4319-b878-d40538bf152d: keystoneauth1.exceptions.http.Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-c3aea86b-3bdf-409c-8862-27c98d98aaaa)

And on the compute node, I see this:

nova-compute.log:2024-09-11 09:56:38.628 45664 WARNING nova.compute.manager [req-af634eb4-cf6a-4461-be14-fe4b4f0b6f87 req-a83bf6fc-ea9a-42e7-b841-8a46a4168eef 46f271442f53434dbf6b35719b8fbc2d a938e696c70f4469884ab4d5f5dcf2ac - - default default] [instance: f3a05784-7404-4319-b878-d40538bf152d] Received unexpected event network-vif-plugged-eba53055-f08d-423d-8ace-474e0af51b0c for instance with vm_state active and task_state None.

When trying to create a snapshot, it gets further, but still reports an error. The message is "create snapshot: Snapshot failed to create."
Here is what I see for the volume and volume snapshot output:

[root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# openstack volume list
+--------------------------------------+------+--------+------+----------------------------------+
| ID                                   | Name | Status | Size | Attached to                      |
+--------------------------------------+------+--------+------+----------------------------------+
| 3cf3c7e3-dc43-4537-a3e7-8b20a4d35927 |      | in-use |    1 | Attached to snaptest on /dev/vda |
+--------------------------------------+------+--------+------+----------------------------------+

[root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# openstack volume snapshot list
+--------------------------------------+--------------------------------+-------------+--------+------+
| ID                                   | Name                           | Description | Status | Size |
+--------------------------------------+--------------------------------+-------------+--------+------+
| a0afabe0-158e-4858-86e6-7f9682cb4433 | snapshot for snaptest_snapshot |             | error  |    1 |
+--------------------------------------+--------------------------------+-------------+--------+------+

Not sure if I believe that though, as if I look in the nfs directory, I see that there does appear to be a snapshot created:

-rw-rw-rw-. 1 107  107  1.0G Sep 11 10:04 volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927
-rw-rw-rw-. 1 root root 193K Sep 11 10:16 volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433

And it appears to be the right kind of file:

[root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# file volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433
volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433: QEMU QCOW2 Image (v3), has backing file (path volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927), 1073741824 bytes

Here is another error that may be related to this issue?
/var/log/cinder/volume.log:2024-09-11 11:38:51.743 1196213 ERROR cinder.volume.drivers.remotefs [req-e8eab520-53bf-405c-b2c7-3be5204ecf1c req-b0d1d655-1567-46aa-8809-ebc62760e0e6 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Call to Nova to create snapshot failed: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-279a7830-5c12-4997-9560-857c30742a63)

How do I add a domain to the project?

On a positive note, when trying the migrate, I found and fixed an error in my sudoers config, and now I am able to live migrate, so it is just the snapshot error that I am trying to fix currently.
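[Editor's note: the error above comes from cinder calling Nova for the NFS driver's assisted snapshot, and that call authenticates with the credentials in the `[nova]` section of cinder.conf. "Expecting to find domain in project" usually means keystone v3 received a project reference without domain scoping. I can't verify this is the cause in this deployment, but a hedged sketch of what that section might need is below; the values are illustrative, assembled from details visible elsewhere in this thread (the auth URL, the `services` project, the `cinder` service user).]

```ini
# /etc/cinder/cinder.conf -- illustrative sketch, not a verified fix.
# Restart cinder-volume after editing.
[nova]
auth_type = password
auth_url = http://10.61.157.80:5000/v3
username = cinder
password = <cinder service password>
project_name = services
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
interface = internal
```

If the `user_domain_name` / `project_domain_name` lines are absent, keystone cannot resolve the project, which matches the BadRequest seen in the log.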
On Thu, Sep 12, 2024 at 1:44 AM <collinl@churchofjesuschrist.org> wrote:
> Amit, thanks for the quick response.
> Not sure what you mean by "BFV"?
Boot from Volume; but in your case you booted from image.
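[Editor's note: one way to tell the two apart is the server's image field, which on the client versions I've seen comes back empty or as "N/A (booted from volume)" for a BFV server. The helper below is a hypothetical illustration of that check; on a real cloud you would feed it the output of `openstack server show <id> -f value -c image`.]

```shell
# Classify a server's boot source from its (already-fetched) image field.
# An empty or "N/A (booted from volume)" image field means boot-from-volume.
boot_source() {
  case "$1" in
    ""|"N/A (booted from volume)") echo "boot from volume" ;;
    *) echo "booted from image" ;;
  esac
}

boot_source "cirros-0.6.2-x86_64-disk"
boot_source "N/A (booted from volume)"
```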
> After your comments, I was able to make a little progress. I hadn't looked at the logs on the compute nodes prior to that, I was only looking on the controller. I saw an error related to the compute node trying to make a connection to the database. I made that fix in the nova.conf on the compute nodes, and now when I spin up a new instance, the volume does correctly show "in-use" and that it is attached to the instance :-)
> However, I am still getting some errors that I don't know what to do with. On the controller, I am seeing a bunch of these:
> /var/log/nova/nova-metadata-api.log:2024-09-11 09:59:51.273 560137 ERROR nova.api.metadata.handler [None req-799be5d5-5a00-4eb9-a90b-8561f73949ab - - - - - -] Failed to get metadata for instance id: f3a05784-7404-4319-b878-d40538bf152d: keystoneauth1.exceptions.http.Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-c3aea86b-3bdf-409c-8862-27c98d98aaaa)
This is from keystone: the user is not authorized to get instance_info, yet you were able to create an instance with this user, which is surprising. I haven't tried packstack, but the installer should take care of this for the admin user, unless you are creating a new tenant and all the resources inside it.
> And on the compute node, I see this:
> nova-compute.log:2024-09-11 09:56:38.628 45664 WARNING nova.compute.manager [req-af634eb4-cf6a-4461-be14-fe4b4f0b6f87 req-a83bf6fc-ea9a-42e7-b841-8a46a4168eef 46f271442f53434dbf6b35719b8fbc2d a938e696c70f4469884ab4d5f5dcf2ac - - default default] [instance: f3a05784-7404-4319-b878-d40538bf152d] Received unexpected event network-vif-plugged-eba53055-f08d-423d-8ace-474e0af51b0c for instance with vm_state active and task_state None.
> When trying to create a snapshot, it gets further, but still reports an error. The message is "create snapshot:Snapshot failed to create." Here is what I see for the volume and volume snapshot output:
> [root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# openstack volume list
> +--------------------------------------+------+--------+------+----------------------------------+
> | ID                                   | Name | Status | Size | Attached to                      |
> +--------------------------------------+------+--------+------+----------------------------------+
> | 3cf3c7e3-dc43-4537-a3e7-8b20a4d35927 |      | in-use |    1 | Attached to snaptest on /dev/vda |
> +--------------------------------------+------+--------+------+----------------------------------+
> [root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# openstack volume snapshot list
> +--------------------------------------+--------------------------------+-------------+--------+------+
> | ID                                   | Name                           | Description | Status | Size |
> +--------------------------------------+--------------------------------+-------------+--------+------+
> | a0afabe0-158e-4858-86e6-7f9682cb4433 | snapshot for snaptest_snapshot |             | error  |    1 |
> +--------------------------------------+--------------------------------+-------------+--------+------+
> Not sure if I believe that though, as if I look in the nfs directory, I see that there does appear to be a snapshot created:
> -rw-rw-rw-. 1 107  107  1.0G Sep 11 10:04 volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927
> -rw-rw-rw-. 1 root root 193K Sep 11 10:16 volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433
> And, it appears to be the right kind of file:
> [root@l21651 56522c30c45d0a9afe5818208e55a58b(keystone_admin)]# file volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433
> volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927.a0afabe0-158e-4858-86e6-7f9682cb4433: QEMU QCOW2 Image (v3), has backing file (path volume-3cf3c7e3-dc43-4537-a3e7-8b20a4d35927), 1073741824 bytes
> Here is another error that may be related to this issue?
> /var/log/cinder/volume.log:2024-09-11 11:38:51.743 1196213 ERROR cinder.volume.drivers.remotefs [req-e8eab520-53bf-405c-b2c7-3be5204ecf1c req-b0d1d655-1567-46aa-8809-ebc62760e0e6 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Call to Nova to create snapshot failed: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-279a7830-5c12-4997-9560-857c30742a63)
> How do I add a domain to the project?
> On a positive note, when trying the migrate, I did find/fix an error in my sudoers config, and now I am able to live migrate, so it is just the snapshot error that I am trying to fix currently.
So it seems like it's an identity issue only. Now I would go like this:

1. See here first: https://docs.openstack.org/newton/install-guide-obs/keystone-users.html - fix anything that is wrong/pending.
2. Create a VM.
3. Create a snapshot using the command `openstack server image create` # this creates a snapshot image of the server in glance, from which you can create the next server.
4. From the errors you are getting so far, it seems the keystone docs will fix it, but if it still fails:
   4.1. Look at what really happened in the image-create request; you can try --debug with the above command.
   4.2. Or look via the req-id: `openstack server event list <server-id>`
        4.2.1. Take the req-id of the create image action; it looks like `req-uuid`.
        4.2.2. Grep for this req-id in the nova logs.
        4.2.3. To understand it better, start with the nova-api logs, then nova-conductor, then nova-compute.
        4.2.4. Look for the point at which it called the glance API.

At that moment, you will know the flow of requests and why it failed. If things are still the same, tell us the commands you tried as well.

Regards
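[Editor's note: steps 4.2.1-4.2.2 above can be sketched as a small grep pipeline. The sample log line mirrors one quoted earlier in this thread; `extract_req_id` is an illustrative helper name, and the `/var/log/...` paths at the end assume the packstack layout described by the poster.]

```shell
# Pull the first `req-<uuid>` token out of whatever is piped in.
extract_req_id() {
  grep -oE 'req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' | head -n 1
}

# Demo on a line shaped like the cinder error quoted in this thread:
sample='2024-09-11 11:38:51.743 ERROR cinder.volume.drivers.remotefs [req-e8eab520-53bf-405c-b2c7-3be5204ecf1c ...] Call to Nova to create snapshot failed'
req_id=$(printf '%s\n' "$sample" | extract_req_id)
echo "$req_id"

# On the real nodes, the same idea:
#   openstack server event list <server-id>            # find the failed action's req-id
#   grep -r "$req_id" /var/log/nova/ /var/log/cinder/  # follow it across services
```

Tracing one req-id through nova-api, nova-conductor, and nova-compute in timestamp order shows exactly where the request chain broke.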
Amit, thanks again for your response.

I *believe* things are/were set up correctly per the install guide. That guide just shows how to create a new setup, but doesn't show how to verify an existing user/role. I was able to find this command that I think shows all the user/role assignments that are currently set up:

[root@l21651 ~(keystone_admin)]# openstack role assignment list --names
+----------+--------------------+-------+------------------+--------+--------+-----------+
| Role     | User               | Group | Project          | Domain | System | Inherited |
+----------+--------------------+-------+------------------+--------+--------+-----------+
| admin    | placement@Default  |       | services@Default |        |        | False     |
| admin    | glance@Default     |       | services@Default |        |        | False     |
| admin    | gnocchi@Default    |       | services@Default |        |        | False     |
| admin    | nova@Default       |       | services@Default |        |        | False     |
| user     | demo@Default       |       | demo@Default     |        |        | False     |
| _member_ | demo@Default       |       | demo@Default     |        |        | False     |
| admin    | ceilometer@Default |       | services@Default |        |        | False     |
| admin    | aodh@Default       |       | services@Default |        |        | False     |
| admin    | admin@Default      |       | admin@Default    |        |        | False     |
| admin    | neutron@Default    |       | services@Default |        |        | False     |
| admin    | cinder@Default     |       | services@Default |        |        | False     |
| admin    | swift@Default      |       | services@Default |        |        | False     |
| admin    | admin@Default      |       |                  |        | all    | False     |
+----------+--------------------+-------+------------------+--------+--------+-----------+

I am running as admin, so I *should* have all the rights needed for creating snapshots, correct?
Here is the file I source to run commands (password is redacted):

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='<redacted>'
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://10.61.157.80:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

I did turn up the debugging in all the config files and restarted services to make sure it took effect. I am still getting the same error. Here is the command I ran to create the image:

openstack server image create --debug cirrosinstance

That appeared to complete successfully from the command line, but the snapshots from my attempts are all still in an error state:

[root@l21651 ~(keystone_admin)]# openstack volume snapshot list
+--------------------------------------+-----------------------------+-------------+--------+------+
| ID                                   | Name                        | Description | Status | Size |
+--------------------------------------+-----------------------------+-------------+--------+------+
| 4b603a9d-81ec-498c-8039-0abbc67023dc | snapshot for cirrosinstance |             | error  |    1 |
| cfcdf62d-142e-4e10-9db8-f87975d29289 | snapshot for cirrosinstance |             | error  |    1 |
| 838ac161-04ed-42d5-8a6b-4b283599fffa | snapshot for cirrosinstance |             | error  |    1 |
+--------------------------------------+-----------------------------+-------------+--------+------+

Here are all the log entries from the controller (I didn't see any on the compute nodes) related to one of the attempts:

/var/log/cinder/api.log:2024-09-12 16:00:52.551 3606 DEBUG cinder.api.middleware.request_id [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 - - - - - -] RequestId filter calling following filter/app _context_setter /usr/lib/python3.9/site-packages/cinder/api/middleware/request_id.py:62
/var/log/cinder/api.log:2024-09-12 16:00:52.749 3606 WARNING keystonemiddleware.auth_token [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 - - - - - -] A valid token was submitted as a service token, but it was not a valid service token. This is incorrect but backwards compatible behaviour. This will be removed in future releases.
/var/log/cinder/api.log:2024-09-12 16:00:52.932 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] GET http://10.61.157.80:8776/v3/limits
/var/log/cinder/api.log:2024-09-12 16:00:52.932 3606 DEBUG cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Empty body provided in request get_body /usr/lib/python3.9/site-packages/cinder/api/openstack/wsgi.py:715
/var/log/cinder/api.log:2024-09-12 16:00:52.932 3606 DEBUG cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Calling method 'index' _process_stack /usr/lib/python3.9/site-packages/cinder/api/openstack/wsgi.py:868
/var/log/cinder/api.log:2024-09-12 16:00:52.953 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] http://10.61.157.80:8776/v3/limits returned with HTTP 200
/var/log/cinder/api.log:2024-09-12 16:00:52.954 3606 INFO eventlet.wsgi.server [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-57287960-b85f-4913-b3ed-bbb5894824c0 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 "GET /v3/limits HTTP/1.1" status: 200 len: 629 time: 0.4035914
/var/log/cinder/api.log:2024-09-12 16:00:52.999 3606 DEBUG cinder.api.middleware.request_id [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a - - - - - -] RequestId filter calling following filter/app _context_setter /usr/lib/python3.9/site-packages/cinder/api/middleware/request_id.py:62
/var/log/cinder/api.log:2024-09-12 16:00:53.001 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] GET http://10.61.157.80:8776/v3/volumes/0aaa3dfe-ba0e-4306-b37a-10d847b1c264
/var/log/cinder/api.log:2024-09-12 16:00:53.002 3606 DEBUG cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Empty body provided in request get_body /usr/lib/python3.9/site-packages/cinder/api/openstack/wsgi.py:715
/var/log/cinder/api.log:2024-09-12 16:00:53.002 3606 DEBUG cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Calling method 'show' _process_stack /usr/lib/python3.9/site-packages/cinder/api/openstack/wsgi.py:868
/var/log/cinder/api.log:2024-09-12 16:00:53.017 3606 INFO cinder.volume.api [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Volume info retrieved successfully.
/var/log/cinder/api.log:2024-09-12 16:00:53.023 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] http://10.61.157.80:8776/v3/volumes/0aaa3dfe-ba0e-4306-b37a-10d847b1c264 returned with HTTP 200
/var/log/cinder/api.log:2024-09-12 16:00:53.024 3606 INFO eventlet.wsgi.server [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-3cbe7b96-c40f-4a0f-8120-c5bcf28a760a 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 "GET /v3/volumes/0aaa3dfe-ba0e-4306-b37a-10d847b1c264 HTTP/1.1" status: 200 len: 1992 time: 0.0256529
/var/log/cinder/api.log:2024-09-12 16:00:53.029 3606 DEBUG cinder.api.middleware.request_id [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 - - - - - -] RequestId filter calling following filter/app _context_setter /usr/lib/python3.9/site-packages/cinder/api/middleware/request_id.py:62
/var/log/cinder/api.log:2024-09-12 16:00:53.031 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] POST http://10.61.157.80:8776/v3/snapshots
/var/log/cinder/api.log:2024-09-12 16:00:53.032 3606 DEBUG cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Action: 'create', calling method: create, body: {"snapshot": {"volume_id": "0aaa3dfe-ba0e-4306-b37a-10d847b1c264", "force": true, "name": "snapshot for cirrosinstance", "description": "", "metadata": {}}} _process_stack /usr/lib/python3.9/site-packages/cinder/api/openstack/wsgi.py:866
/var/log/cinder/api.log:2024-09-12 16:00:53.048 3606 INFO cinder.volume.api [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Volume info retrieved successfully.
/var/log/cinder/api.log:2024-09-12 16:00:53.049 3606 INFO cinder.api.v3.snapshots [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Create snapshot from volume 0aaa3dfe-ba0e-4306-b37a-10d847b1c264
/var/log/cinder/api.log:2024-09-12 16:00:53.088 3606 DEBUG cinder.quota [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Created reservations ['e24a54da-152f-47b8-80e1-4ee3c1a2e057', '2770eb0b-b210-4e56-824f-52a64c0ec7c3', 'b78e98c7-2b8f-4298-9626-1c7cc1f559b9', 'd9524731-94ca-49ce-b997-57683896a7aa'] reserve /usr/lib/python3.9/site-packages/cinder/quota.py:755
/var/log/cinder/api.log:2024-09-12 16:00:53.122 3606 INFO cinder.volume.api [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Snapshot force create request issued successfully.
/var/log/cinder/api.log:2024-09-12 16:00:53.123 3606 INFO cinder.api.openstack.wsgi [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] http://10.61.157.80:8776/v3/snapshots returned with HTTP 202
/var/log/cinder/api.log:2024-09-12 16:00:53.124 3606 INFO eventlet.wsgi.server [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 "POST /v3/snapshots HTTP/1.1" status: 202 len: 617 time: 0.0946217
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.133 1646 DEBUG cinder.scheduler.host_manager [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Updating capabilities for l21651@lvm#lvm: {'pool_name': 'lvm', 'total_capacity_gb': 19.57, 'free_capacity_gb': 19.57, 'reserved_percentage': 0, 'location_info': 'LVMVolumeDriver:l21651:cinder-volumes:thin:0', 'QoS_support': False, 'provisioned_capacity_gb': 0.0, 'max_over_subscription_ratio': '20.0', 'thin_provisioning_support': True, 'thick_provisioning_support': False, 'total_volumes': 1, 'filter_function': None, 'goodness_function': None, 'multiattach': True, 'backend_state': 'up', 'allocated_capacity_gb': 0, 'cacheable': True, 'volume_backend_name': 'lvm', 'storage_protocol': ['iSCSI', 'iscsi'], 'vendor_name': 'Open Source', 'driver_version': '3.0.0', 'timestamp': datetime.datetime(2024, 9, 12, 22, 0, 8, 38146)} update_from_volume_capability /usr/lib/python3.9/site-packages/cinder/scheduler/host_manager.py:415
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.133 1646 DEBUG cinder.scheduler.host_manager [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Updating capabilities for l21651@nfs#nfs: {'volume_backend_name': 'nfs', 'vendor_name': 'Open Source', 'driver_version': '1.4.0', 'storage_protocol': ['NFS', 'nfs'], 'total_capacity_gb': 100.0, 'free_capacity_gb': 99.8924560546875, 'reserved_percentage': 0, 'QoS_support': False, 'sparse_copy_volume': True, 'provisioned_capacity_gb': 25.0, 'max_over_subscription_ratio': 20.0, 'thin_provisioning_support': True, 'thick_provisioning_support': False, 'allocated_capacity_gb': 2, 'filter_function': None, 'goodness_function': None, 'timestamp': datetime.datetime(2024, 9, 12, 21, 59, 53, 443615)} update_from_volume_capability /usr/lib/python3.9/site-packages/cinder/scheduler/host_manager.py:415
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.134 1646 DEBUG cinder.scheduler.base_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Starting with 2 host(s) get_filtered_objects /usr/lib/python3.9/site-packages/cinder/scheduler/base_filter.py:97
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.134 1646 DEBUG cinder.scheduler.base_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects /usr/lib/python3.9/site-packages/cinder/scheduler/base_filter.py:125
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.134 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Checking if host l21651@nfs#nfs can create a 1 GB volume (None) backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:57
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.135 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Storage Capacity factors {'total_capacity': 100.0, 'free_capacity': 99.8924560546875, 'reserved_capacity': 0, 'total_reserved_available_capacity': 100.0, 'max_over_subscription_ratio': 20.0, 'total_available_capacity': 2000.0, 'provisioned_capacity': 25.0, 'calculated_free_capacity': 1975.0, 'virtual_free_capacity': 1975.0, 'free_percent': 98.75, 'provisioned_ratio': 0.01, 'provisioned_type': 'thin'} backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:137
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.135 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Checking provisioning for request of 1 GB. Backend: host 'l21651@nfs#nfs': free_capacity_gb: 99.8924560546875, total_capacity_gb: 100.0, allocated_capacity_gb: 2, max_over_subscription_ratio: 20.0, reserved_percentage: 0, provisioned_capacity_gb: 25.0, thin_provisioning_support: True, thick_provisioning_support: False, pools: {}, updated at: 2024-09-12 21:59:53.443615 backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:155
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.135 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Space information for volume creation on host l21651@nfs#nfs (requested / avail): 1/1975.0 backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:186
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.135 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Checking if host l21651@lvm#lvm can create a 1 GB volume (None) backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:57
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Storage Capacity factors {'total_capacity': 19.57, 'free_capacity': 19.57, 'reserved_capacity': 0, 'total_reserved_available_capacity': 19.57, 'max_over_subscription_ratio': 20.0, 'total_available_capacity': 391.4, 'provisioned_capacity': 0.0, 'calculated_free_capacity': 391.4, 'virtual_free_capacity': 391.4, 'free_percent': 100.0, 'provisioned_ratio': 0.0, 'provisioned_type': 'thin'} backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:137
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Checking provisioning for request of 1 GB. Backend: host 'l21651@lvm#lvm': free_capacity_gb: 19.57, total_capacity_gb: 19.57, allocated_capacity_gb: 0, max_over_subscription_ratio: 20.0, reserved_percentage: 0, provisioned_capacity_gb: 0.0, thin_provisioning_support: True, thick_provisioning_support: False, pools: {}, updated at: 2024-09-12 22:00:08.038146 backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:155
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.filters.capacity_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Space information for volume creation on host l21651@lvm#lvm (requested / avail): 1/391.4 backend_passes /usr/lib/python3.9/site-packages/cinder/scheduler/filters/capacity_filter.py:186
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.base_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Filter CapacityFilter returned 2 host(s) get_filtered_objects /usr/lib/python3.9/site-packages/cinder/scheduler/base_filter.py:125
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.base_filter [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Filter CapabilitiesFilter returned 2 host(s) get_filtered_objects /usr/lib/python3.9/site-packages/cinder/scheduler/base_filter.py:125
/var/log/cinder/scheduler.log:2024-09-12 16:00:53.136 1646 DEBUG cinder.scheduler.filter_scheduler [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Filtered [host 'l21651@nfs#nfs': free_capacity_gb: 99.8924560546875, total_capacity_gb: 100.0,
allocated_capacity_gb: 2, max_over_subscription_ratio: 20.0, reserved_percentage: 0, provisioned_capacity_gb: 25.0, thin_provisioning_support: True, thick_provisioning_support: False, pools: {}, updated at: 2024-09-12 21:59:53.443615, host 'l21651@lvm#lvm': free_capacity_gb: 19.57, total_capacity_gb: 19.57, allocated_capacity_gb: 0, max_over_subscription_ratio: 20.0, reserved_percentage: 0, provisioned_capacity_gb: 0.0, thin_provisioning_support: True, thick_provisioning_support: False, pools: {}, updated at: 2024-09-12 22:00:08.038146] _get_weighted_candidates /usr/lib/python3.9/site-packages/cinder/scheduler/filter_scheduler.py:358 /var/log/cinder/scheduler.log:2024-09-12 16:00:53.137 1646 DEBUG cinder.scheduler.base_weight [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Weigher CapacityWeigher returned, weigher value is {max: 1975.0, min: 391.4} get_weighed_objects /usr/lib/python3.9/site-packages/cinder/scheduler/base_weight.py:156 /var/log/cinder/scheduler.log:2024-09-12 16:00:53.137 1646 DEBUG cinder.scheduler.host_manager [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Weighed [WeighedHost [host: l21651@nfs#nfs, weight: 1.0], WeighedHost [host: l21651@lvm#lvm, weight: 0.0]] get_weighed_backends /usr/lib/python3.9/site-packages/cinder/scheduler/host_manager.py:565 /var/log/cinder/scheduler.log:2024-09-12 16:00:53.137 1646 DEBUG cinder.scheduler.host_manager [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Consumed 1 GB from backend: host 'l21651@nfs#nfs': free_capacity_gb: 98.8924560546875, total_capacity_gb: 100.0, allocated_capacity_gb: 3, max_over_subscription_ratio: 20.0, reserved_percentage: 0, provisioned_capacity_gb: 26.0, 
thin_provisioning_support: True, thick_provisioning_support: False, pools: {}, updated at: 2024-09-12 22:00:53.137624 consume_from_volume /usr/lib/python3.9/site-packages/cinder/scheduler/host_manager.py:354 /var/log/cinder/volume.log:2024-09-12 16:00:53.190 3607 DEBUG cinder.coordination [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Acquiring lock "/var/lib/cinder/cinder-nfs-0aaa3dfe-ba0e-4306-b37a-10d847b1c264" by "create_snapshot" __acquire /usr/lib/python3.9/site-packages/cinder/coordination.py:143 /var/log/cinder/volume.log:2024-09-12 16:00:53.191 3607 DEBUG cinder.coordination [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Lock "/var/lib/cinder/cinder-nfs-0aaa3dfe-ba0e-4306-b37a-10d847b1c264" acquired by "create_snapshot" :: waited 0.002s __acquire /usr/lib/python3.9/site-packages/cinder/coordination.py:146 /var/log/cinder/volume.log:2024-09-12 16:00:53.192 3607 DEBUG cinder.volume.drivers.remotefs [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Creating online snapshot 4b603a9d-81ec-498c-8039-0abbc67023dc of volume 0aaa3dfe-ba0e-4306-b37a-10d847b1c264 _create_snapshot /usr/lib/python3.9/site-packages/cinder/volume/drivers/remotefs.py:1695 /var/log/cinder/volume.log:2024-09-12 16:00:53.224 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=60 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info -f raw --output=json --force-share 
/var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 /var/log/cinder/volume.log:2024-09-12 16:00:53.804 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=60 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info -f raw --output=json --force-share /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 /var/log/cinder/volume.log:2024-09-12 16:00:53.806 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=60 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info -f luks --output=json --force-share /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 /var/log/cinder/volume.log:2024-09-12 16:00:54.321 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=60 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info -f luks --output=json --force-share /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264" returned: 1 in 
0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 /var/log/cinder/volume.log:2024-09-12 16:00:54.322 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] '/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=60 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info -f luks --output=json --force-share /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473 /var/log/cinder/volume.log:2024-09-12 16:00:54.323 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img create -f qcow2 -o backing_file=/var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264,backing_fmt=raw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 /var/log/cinder/volume.log:2024-09-12 16:00:54.807 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img create -f qcow2 -o backing_file=/var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264,backing_fmt=raw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc 
1G" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 /var/log/cinder/volume.log:2024-09-12 16:00:54.808 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img rebase -u -b volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264 -F raw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 /var/log/cinder/volume.log:2024-09-12 16:00:55.203 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img rebase -u -b volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264 -F raw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 /var/log/cinder/volume.log:2024-09-12 16:00:55.204 3607 WARNING cinder.volume.drivers.remotefs [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc is being set with open permissions: ugo+rw /var/log/cinder/volume.log:2024-09-12 16:00:55.204 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - 
-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf chmod ugo+rw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 /var/log/cinder/volume.log:2024-09-12 16:00:55.586 3607 DEBUG oslo_concurrency.processutils [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf chmod ugo+rw /var/lib/cinder/mnt/56522c30c45d0a9afe5818208e55a58b/volume-0aaa3dfe-ba0e-4306-b37a-10d847b1c264.4b603a9d-81ec-498c-8039-0abbc67023dc" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 /var/log/cinder/volume.log:2024-09-12 16:00:55.588 3607 DEBUG cinder.compute.nova [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Creating Keystone token plugin using URL: http://10.61.157.80:5000 novaclient /usr/lib/python3.9/site-packages/cinder/compute/nova.py:106 /var/log/cinder/volume.log:2024-09-12 16:00:55.713 3607 ERROR cinder.volume.drivers.remotefs [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Call to Nova to create snapshot failed: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. 
(HTTP 400) (Request-ID: req-6cc2a596-b980-43b9-b3dc-5eff336a4949) /var/log/cinder/volume.log:2024-09-12 16:00:55.714 3607 DEBUG cinder.coordination [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Lock "/var/lib/cinder/cinder-nfs-0aaa3dfe-ba0e-4306-b37a-10d847b1c264" released by "create_snapshot" :: held 2.523s __release /usr/lib/python3.9/site-packages/cinder/coordination.py:157 /var/log/cinder/volume.log:2024-09-12 16:00:55.721 3607 INFO cinder.message.api [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Creating message record for request_id = req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 /var/log/cinder/volume.log:2024-09-12 16:00:55.730 3607 ERROR oslo_messaging.rpc.server [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Exception during message handling: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. 
(HTTP 400) (Request-ID: req-6cc2a596-b980-43b9-b3dc-5eff336a4949) /var/log/glance/api.log:2024-09-12 16:00:53.966 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:53] "GET /v2/schemas/image HTTP/1.1" 200 6283 0.822451 /var/log/glance/api.log:2024-09-12 16:00:53.970 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Determining version of request: GET /v2/schemas/image Accept: */* process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:44 /var/log/glance/api.log:2024-09-12 16:00:53.970 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Using url versioning process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:57 /var/log/glance/api.log:2024-09-12 16:00:53.970 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Matched version: v2 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:69 /var/log/glance/api.log:2024-09-12 16:00:53.971 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] new path /v2/schemas/image process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:70 /var/log/glance/api.log:2024-09-12 16:00:53.973 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd 
b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:53] "GET /v2/schemas/image HTTP/1.1" 200 6283 0.003874 /var/log/glance/api.log:2024-09-12 16:00:54.037 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Determining version of request: POST /v2/images Accept: */* process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:44 /var/log/glance/api.log:2024-09-12 16:00:54.038 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Using url versioning process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:57 /var/log/glance/api.log:2024-09-12 16:00:54.038 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Matched version: v2 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:69 /var/log/glance/api.log:2024-09-12 16:00:54.038 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] new path /v2/images process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:70 /var/log/glance/api.log:2024-09-12 16:00:54.082 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:54] "POST /v2/images HTTP/1.1" 201 2105 0.045671 /var/log/glance/api.log:2024-09-12 16:00:54.093 3622 DEBUG glance.api.middleware.version_negotiation [None 
req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Determining version of request: PUT /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0/file Accept: */* process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:44 /var/log/glance/api.log:2024-09-12 16:00:54.093 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Using url versioning process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:57 /var/log/glance/api.log:2024-09-12 16:00:54.093 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Matched version: v2 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:69 /var/log/glance/api.log:2024-09-12 16:00:54.094 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] new path /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0/file process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:70 /var/log/glance/api.log:2024-09-12 16:00:54.102 3622 DEBUG glance.common.utils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Sorted locations: [] sort_image_locations /usr/lib/python3.9/site-packages/glance/common/utils.py:736 /var/log/glance/api.log:2024-09-12 16:00:54.103 3622 INFO glance.api.v2.image_data [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Unable to create trust: no such option 
collect_timing in group [keystone_authtoken] Use the existing user token. /var/log/glance/api.log:2024-09-12 16:00:54.126 3622 DEBUG glance.location [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Enabling in-flight format inspection for qcow2 set_data /usr/lib/python3.9/site-packages/glance/location.py:581 /var/log/glance/api.log:2024-09-12 16:00:54.127 3622 DEBUG glance_store._drivers.filesystem [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Wrote 0 bytes to /var/lib/glance/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0 with checksum d41d8cd98f00b204e9800998ecf8427e and multihash cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e add /usr/lib/python3.9/site-packages/glance_store/_drivers/filesystem.py:773 /var/log/glance/api.log:2024-09-12 16:00:54.127 3622 WARNING glance.location [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Image format qcow2 did not match; unable to calculate virtual size /var/log/glance/api.log:2024-09-12 16:00:54.166 3622 DEBUG glance_store.multi_backend [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Skipping store.set_acls... 
not implemented set_acls_for_multi_store /usr/lib/python3.9/site-packages/glance_store/multi_backend.py:548 /var/log/glance/api.log:2024-09-12 16:00:54.169 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:54] "PUT /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0/file HTTP/1.1" 204 213 0.076184 /var/log/glance/api.log:2024-09-12 16:00:54.175 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Determining version of request: GET /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0 Accept: */* process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:44 /var/log/glance/api.log:2024-09-12 16:00:54.175 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Using url versioning process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:57 /var/log/glance/api.log:2024-09-12 16:00:54.176 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Matched version: v2 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:69 /var/log/glance/api.log:2024-09-12 16:00:54.176 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] new path /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:70 
/var/log/glance/api.log:2024-09-12 16:00:54.188 3622 DEBUG glance.common.utils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Sorted locations: [{'id': 12, 'url': 'file:///var/lib/glance/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0', 'metadata': {'store': 'file'}, 'status': 'active'}] sort_image_locations /usr/lib/python3.9/site-packages/glance/common/utils.py:736 /var/log/glance/api.log:2024-09-12 16:00:54.190 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:54] "GET /v2/images/bb02f9bc-203a-4655-af53-c6e8f6f231e0 HTTP/1.1" 200 2037 0.015839 /var/log/glance/api.log:2024-09-12 16:00:54.195 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Determining version of request: GET /v2/schemas/image Accept: */* process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:44 /var/log/glance/api.log:2024-09-12 16:00:54.195 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Using url versioning process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:57 /var/log/glance/api.log:2024-09-12 16:00:54.196 3622 DEBUG glance.api.middleware.version_negotiation [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Matched version: v2 process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:69 /var/log/glance/api.log:2024-09-12 16:00:54.196 3622 DEBUG glance.api.middleware.version_negotiation [None 
req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] new path /v2/schemas/image process_request /usr/lib/python3.9/site-packages/glance/api/middleware/version_negotiation.py:70 /var/log/glance/api.log:2024-09-12 16:00:54.199 3622 INFO eventlet.wsgi.server [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 - - [12/Sep/2024 16:00:54] "GET /v2/schemas/image HTTP/1.1" 200 6283 0.004117 /var/log/nova/nova-api.log:2024-09-12 16:00:52.467 4059 DEBUG nova.api.openstack.wsgi [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Action: 'action', calling method: <bound method ServersController._action_create_image of <nova.api.openstack.compute.servers.ServersController object at 0x7fe02caee610>>, body: {"createImage": {"name": "cirrosinstance"}} _process_stack /usr/lib/python3.9/site-packages/nova/api/openstack/wsgi.py:511 /var/log/nova/nova-api.log:2024-09-12 16:00:52.478 4059 DEBUG nova.compute.api [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] [instance: 372d943f-f566-48e5-ab3c-da1773f7b8b6] Fetching instance by UUID get /usr/lib/python3.9/site-packages/nova/compute/api.py:2978 /var/log/nova/nova-api.log:2024-09-12 16:00:52.482 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Acquiring lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404 /var/log/nova/nova-api.log:2024-09-12 16:00:52.483 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 
956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" acquired by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
/var/log/nova/nova-api.log:2024-09-12 16:00:52.483 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" "released" by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
/var/log/nova/nova-api.log:2024-09-12 16:00:52.484 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Acquiring lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
/var/log/nova/nova-api.log:2024-09-12 16:00:52.485 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" acquired by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
/var/log/nova/nova-api.log:2024-09-12 16:00:52.485 4059 DEBUG oslo_concurrency.lockutils [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] Lock "d3f08bea-8e00-4a4d-b90f-adf8de60f882" "released" by "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
/var/log/nova/nova-api.log:2024-09-12 16:00:52.955 4059 INFO nova.compute.api [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] [instance: 372d943f-f566-48e5-ab3c-da1773f7b8b6] Attempting to quiesce instance before volume snapshot.
/var/log/nova/nova-api.log:2024-09-12 16:00:52.976 4059 INFO nova.compute.api [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] [instance: 372d943f-f566-48e5-ab3c-da1773f7b8b6] Skipping quiescing instance: QEMU guest agent is not enabled.
/var/log/nova/nova-api.log:2024-09-12 16:00:53.025 4059 DEBUG nova.compute.api [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] [instance: 372d943f-f566-48e5-ab3c-da1773f7b8b6] Creating snapshot from volume 0aaa3dfe-ba0e-4306-b37a-10d847b1c264. snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/api.py:3512
/var/log/nova/nova-api.log:2024-09-12 16:00:54.203 4059 INFO nova.api.openstack.requestlog [None req-c2de5695-40eb-42c6-8b8b-b5278b222da7 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - default default] 10.61.157.80 "POST /v2.1/servers/372d943f-f566-48e5-ab3c-da1773f7b8b6/action" status: 202 len: 52 microversion: 2.45 time: 1.899570

As you can see, there are only two log entries with an ERROR state, and both appear to be related to identity, as you mentioned:

/var/log/cinder/volume.log:2024-09-12 16:00:55.713 3607 ERROR cinder.volume.drivers.remotefs [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Call to Nova to create snapshot failed: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-6cc2a596-b980-43b9-b3dc-5eff336a4949)
/var/log/cinder/volume.log:2024-09-12 16:00:55.730 3607 ERROR oslo_messaging.rpc.server [req-c2de5695-40eb-42c6-8b8b-b5278b222da7 req-07ccc3d6-dbfd-4ef4-b11a-253c44cba180 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Exception during message handling: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-6cc2a596-b980-43b9-b3dc-5eff336a4949)

I did some digging and found a few sites that mention needing to have these set (which I already do):

export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

I thought it might be a case-sensitivity issue, since the domain listed for each user by the "openstack user list --long" command is "default"... but I changed that in my sourced authentication file and re-authenticated, and it didn't make any difference.

Where else should I look? Thanks again for all your help!
I tried something a little different and was able to get some success.

When I create a volume with the nfs volume type, that is when I get the errors on creating snapshots (nfs is what I currently have set as the default_volume_type). However, when I create a volume with the iscsi volume type (which is local lvm storage) and then create an instance from that volume, the snapshot works without issues.

I had seen this entry in my cinder.conf file:

# Enable support for snapshots on the NFS driver. Platforms using libvirt
# <1.2.7 will encounter issues with this feature. (boolean value)
#nfs_snapshot_support = false
nfs_snapshot_support = true

I had set it to true several days ago, as it seemed relevant, but it doesn't seem to let snapshots work when backed by the nfs storage.

Do I have something misconfigured in my nfs configuration in cinder? Here is what I have configured for both lvm and nfs in my cinder.conf file:

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=10.61.157.80
target_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes

[nfs]
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options = rw,_netdev,bg,hard,nointr,tcp,vers=3,noatime,rsize=65536,wsize=65536
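For context, backend sections like the ones above only take effect when they are enabled in the [DEFAULT] section. A minimal sketch of that wiring, assuming a standard packstack-style multi-backend layout (the exact values here are illustrative, not taken from the poster's file):

```ini
[DEFAULT]
# Each name must match a backend section below ([lvm], [nfs]).
enabled_backends = lvm,nfs
# New volumes default to the nfs type, matching the behavior described above.
default_volume_type = nfs
```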
Happy to hear that you made progress and narrowed it down to the Cinder configuration. I think the Cinder folks will be able to help better here; Rajat might be able to suggest the config part.
On Sun, Sep 15, 2024 at 10:35 PM Amit Uniyal <auniyal@redhat.com> wrote:
On Fri, Sep 13, 2024 at 9:58 PM <collinl@churchofjesuschrist.org> wrote:
I had seen this entry in my cinder.conf file:

# Enable support for snapshots on the NFS driver. Platforms using libvirt
# <1.2.7 will encounter issues with this feature. (boolean value)
#nfs_snapshot_support = false
nfs_snapshot_support = true
This ^^ is the correct setting, but maybe you're not setting it in the correct location. It needs to be specified in the backend configuration.
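To make the "correct location" point concrete: an option set outside its driver's backend section is simply not seen by that driver. The quick stdlib sketch below (not part of any OpenStack tooling; the sample config is hypothetical) shows how to check which section a cinder.conf option actually lives in:

```python
import configparser

# Hypothetical cinder.conf fragment: nfs_snapshot_support belongs in the
# backend section ([nfs]), where the NFS driver reads its options.
sample = """
[DEFAULT]
enabled_backends = lvm,nfs

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_snapshot_support = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# True only because the option sits in the [nfs] section.
print(cfg.getboolean("nfs", "nfs_snapshot_support", fallback=False))  # True
```

The same check against a file where the option was left in some other section would fall back to False, which mirrors the symptom described in the thread.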
Here is what I have configured for both lvm and nfs in my cinder.conf file:

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
target_ip_address=10.61.157.80
target_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes
[nfs]
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options = rw,_netdev,bg,hard,nointr,tcp,vers=3,noatime,rsize=65536,wsize=65536
This is where you should set "nfs_snapshot_support = true" (in the [nfs] section).

Alan
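Putting Alan's advice together with the section quoted above, the corrected backend block would look like this (same values as in the thread, with the one line added as he suggests):

```ini
[nfs]
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options = rw,_netdev,bg,hard,nointr,tcp,vers=3,noatime,rsize=65536,wsize=65536
# Per Alan: this must be specified in the backend configuration section,
# not elsewhere in cinder.conf.
nfs_snapshot_support = true
```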
Alan,
Thank you very much for your response. I had somehow missed it amongst all the other traffic, so my apologies for the slow reply.

I did try moving "nfs_snapshot_support = true" to the [nfs] section and restarting cinder... but snapshots still fail. The only errors in the logs are still just these about not finding the domain:

/var/log/cinder/volume.log:2024-09-20 08:03:10.603 1308361 ERROR cinder.volume.drivers.remotefs [req-d3f96912-970f-4283-b829-1786c5dea714 req-735aa8da-cb5a-4147-b4c1-3174c43d4934 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Call to Nova to create snapshot failed: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-2c295207-2597-477f-b2d4-643706bf3d27)
/var/log/cinder/volume.log:2024-09-20 08:03:10.631 1308361 ERROR oslo_messaging.rpc.server [req-d3f96912-970f-4283-b829-1786c5dea714 req-735aa8da-cb5a-4147-b4c1-3174c43d4934 956bef8f3ca34d9bbb5b3bb8876890fd b3001e31df2947e78c5fda1e0d8479b0 - - - -] Exception during message handling: keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-2c295207-2597-477f-b2d4-643706bf3d27)

Any other ideas where I should look to get snapshots working on nfs-backed storage? Is it perhaps some sort of version mismatch between nova/cinder/keystone? I did the install from scratch using the bobcat release.
Here are the versions I have installed:

openstack-keystone.noarch 1:24.0.0-1.el9s @centos-openstack-bobcat
openstack-cinder.noarch 1:23.2.0-1.el9s @centos-openstack-bobcat
openstack-nova-api.noarch 1:28.3.0-1.el9s @centos-openstack-bobcat
openstack-nova-common.noarch 1:28.3.0-1.el9s @centos-openstack-bobcat
openstack-nova-conductor.noarch 1:28.3.0-1.el9s @centos-openstack-bobcat
openstack-nova-novncproxy.noarch 1:28.3.0-1.el9s @centos-openstack-bobcat
openstack-nova-scheduler.noarch 1:28.3.0-1.el9s @centos-openstack-bobcat
AFAIR, cinder needs to interact with nova to complete a snapshot with the nfs backend. Did you configure the [nova] options in your cinder.conf? If you did, then it would probably be helpful if you could share the options configured in that block. I expect the section to have options like the following example:

---
[nova]
auth_type = password
auth_url = <url of your keystone endpoint>
username = nova
user_domain_name = Default
project_name = service
project_domain_name = Default
password = <password of nova user>
---
# you can use cinder user instead of nova user
Takashi Kajinami wrote:
AFAIR, cinder needs to interact with nova to complete a snapshot with the nfs backend. Did you configure the [nova] options in your cinder.conf?
Thank you, Takashi. Yes, I do have the [nova] options configured. The only option I didn't have from your list is auth_type. This is what I had for the [nova] section:

[nova]
auth_url=http://10.61.157.80:5000
username=nova
user_domain_name=Default
password=XXXXXX
project_name=services
project_domain_name=Default

And I can report success after adding auth_type=password. After making that change and restarting, snapshot creation with nfs as the backend succeeds. Thank you!
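For anyone landing on this thread later, the working [nova] block assembled from the messages above would look like this (values as reported by the poster; the password is redacted in the original):

```ini
[nova]
# The missing line that caused "Expecting to find domain in project":
auth_type = password
auth_url = http://10.61.157.80:5000
username = nova
user_domain_name = Default
password = XXXXXX
project_name = services
project_domain_name = Default
```

Without auth_type, keystoneauth cannot pick an auth plugin, so the domain-scoped project information never reaches Keystone correctly when Cinder calls Nova for the NFS-backed snapshot.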
participants (4)
- Alan Bishop
- Amit Uniyal
- collinl@churchofjesuschrist.org
- Takashi Kajinami