As the listed operations [migrate, resize, snapshot] are driven by nova, you should start by looking in the nova service logs; those should point you to the other projects' logs for any failed calls.
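
For example, the request IDs in the snapshot error below are handy for chasing a failed call across services. A minimal sketch, assuming the default packstack/RDO log locations (adjust the paths for your install):

    # on the controller
    grep req-499d3bf9-97a0-48b2-8565-254a87287cff /var/log/nova/*.log /var/log/cinder/*.log

    # on each compute node
    grep ERROR /var/log/nova/nova-compute.log | tail -n 50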

From the snapshot error:

    Error: Unable to create snapshot. Details
    Invalid input received: Invalid volume: Volume e12b3ff8-9870-45f9-b59a-c6fb194ed517 status must be available, in-use, but current status is: attaching. (HTTP 400) (Request-ID: req-499d3bf9-97a0-48b2-8565-254a87287cff) (HTTP 400) (Request-ID: req-d0cd5aed-894e-40b2-8a2f-4b22188ec372)

Since you are able to use the VM, it is not BFV (boot from volume).
Can you tell us how you attached the volume? On successful attachment the volume status should be updated to "in-use", not "attaching", and you should be able to mount and use the volume from inside the VM.
Is this driven by automation (create-attach-snapshot)? In that case, you might have called the snapshot before the attachment finished, hence the "attaching" status. The checks below should help confirm that.
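
A quick way to inspect the volume (a sketch using the standard openstack client; the state reset needs admin credentials and should only be done if the volume really is attached and healthy):

    $ openstack volume show e12b3ff8-9870-45f9-b59a-c6fb194ed517 -c status -c attachments

    # only if the attachment actually completed but the status is stuck:
    $ openstack volume set --state in-use e12b3ff8-9870-45f9-b59a-c6fb194ed517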

Also, considering you are new to OpenStack, the amount of information you have provided (the errors, the chain of events, the release) is excellent.

Regards

On Wed, Sep 11, 2024 at 2:04 AM <collinl@churchofjesuschrist.org> wrote:
As the subject says, I am new to OpenStack, and have spun up a test cluster with one control node and two compute nodes. It seems to work for several tasks (e.g. I can spin up a cirros instance on both of the compute nodes, and that works just fine). However, when I attempt to migrate an instance from one compute node to the other, the instance goes into an error state. (To get it running again, I have to set the state to active, then reboot the instance.)
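
For reference, that recovery is roughly the following (assuming the standard openstack client; resetting the state needs admin credentials):

    $ openstack server set --state active <instance-id>
    $ openstack server reboot <instance-id>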

I am not sure which logs will have the information needed to debug this, but I have looked through the nova, cinder, and keystone logs. Nothing I am seeing points to an obvious cause of the failures.

Also, taking a snapshot seems to fail, with this error:

    Error: Unable to create snapshot. Details
    Invalid input received: Invalid volume: Volume e12b3ff8-9870-45f9-b59a-c6fb194ed517 status must be available, in-use, but current status is: attaching. (HTTP 400) (Request-ID: req-499d3bf9-97a0-48b2-8565-254a87287cff) (HTTP 400) (Request-ID: req-d0cd5aed-894e-40b2-8a2f-4b22188ec372)

Looking at the volumes in my environment, both volumes show a status of "attaching" (both belong to test cirros instances, both are type nfs, and both instances seem to work just fine, meaning I can connect to them either on the console or via ssh and run commands without errors).
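
For reference, this is how I am checking the status (standard openstack client):

    $ openstack volume list -c ID -c Status -c "Attached to"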

Is each of those tasks failing because of the "attaching" status of the volumes?

I spun up the cluster using packstack, and all the nodes are installed on Rocky Linux 9.4. I added an nfs backend after packstack was all done, and that does seem to be working: the ephemeral volumes used to spin up the test instances do seem to be on the nfs storage rather than the default lvm storage backend.
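
In case it helps, this is roughly the stanza I added to /etc/cinder/cinder.conf (the standard cinder NfsDriver config; the section name and share file path may differ in your setup):

    [DEFAULT]
    enabled_backends = lvm,nfs

    [nfs]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares
    volume_backend_name = nfs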

I have installed OpenStack using the Bobcat release (from the centos-openstack-bobcat repository).

I have been crawling through the standalone install documentation to see if I can find something that explains why the instance migration isn't working, but, again, nothing is jumping out at me.

Any guidance on where I should be looking, or advice on what I may be missing would be greatly appreciated.