Windows 2012 VM image importing and Instance Launch Failure
KK CHN
kkchn.in at gmail.com
Wed Aug 25 06:18:48 UTC 2021
Thank you all.
I have been able to get the single-disk Windows VM exported from Hyper-V
imported into OpenStack (Ussuri, Glance and QEMU KVM). The instance launched
and shows the status Running in the Power State column of the Horizon
dashboard. (I still need to check with the administrator user, who will log in
and confirm the VM is working as expected.)
My second phase: I need to import a multi-disk Windows VM into my OpenStack
setup.
(For the single-disk Windows vhdx file, I converted it to a qcow2 file and
imported it into OpenStack following the steps I posted in my previous emails.)
Now, how can I import a multi-disk Windows VM into OpenStack?
The first step is to convert the vhdx files exported from Hyper-V to qcow2
files.
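( For example, I plan to run something like this for each exported disk; the
file names here are just placeholders:

  qemu-img convert -p -f vhdx -O qcow2 DB_disk_1.vhdx DB_disk_1.qcow2
  qemu-img convert -p -f vhdx -O qcow2 DB_disk_2.vhdx DB_disk_2.qcow2
)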
After this step, what needs to be done to bring a multi-disk Windows VM into
OpenStack?
For example, I have a Windows deployment with:
1. an application server with a 1 TB single-disk image exported from Hyper-V.
This I can import into my OpenStack setup with the steps I followed in my
earlier emails, as it is a single-disk image.
2. a database server for the above application with 700 GB spread over multiple
disks (100 GB and 600 GB disk images exported from Hyper-V in vhdx format).
How can I import this DB server into my OpenStack setup (Ussuri, Glance and
QEMU KVM)? As this comes from Windows/Hyper-V, I am new to handling a
multi-disk setup from Windows.
( For Linux VMs exported from oVirt (RHVM) I am able to import multi-disk VMs
into OpenStack, as the disks use LVM and I only have to activate and mount the
volume group by its VG name and add entries to the /etc/fstab file. )
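( On those Linux guests it is roughly the following once the extra disks are
attached; the VG/LV names here are just placeholders:

  vgchange -ay data_vg
  mount /dev/data_vg/data_lv /data
  echo '/dev/data_vg/data_lv /data ext4 defaults 0 2' >> /etc/fstab
)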
But for a Windows VM with multiple disks, how do I do this? What steps do I
have to perform to import it into OpenStack? My tentative plan is sketched
below; any hints or suggestions are most welcome.
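( My current guess, assuming Cinder is available for volumes and using
placeholder image, flavor and network names, is something like this; please
correct me if it is the wrong approach:

  # upload only the boot disk as a bootable Glance image
  openstack image create "WindowsDB_boot" --file DB_disk_1.qcow2 \
    --disk-format qcow2 --container-format bare --private

  # upload the data disk as an image too, then turn it into a Cinder volume
  openstack image create "WindowsDB_data" --file DB_disk_2.qcow2 \
    --disk-format qcow2 --container-format bare --private
  openstack volume create --image WindowsDB_data --size 600 WindowsDB_data_vol

  # boot from the first disk and attach the data volume to the instance
  openstack server create --image WindowsDB_boot --flavor <flavor> \
    --network <network> WindowsDBServer
  openstack server add volume WindowsDBServer WindowsDB_data_vol
)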
Kris
On Tue, Aug 24, 2021 at 1:40 PM Eugen Block <eblock at nde.ag> wrote:
> Of course you need a running nova-conductor, how did you launch VMs before?
>
>
> Quoting KK CHN <kkchn.in at gmail.com>:
>
> > Yes, I too suspect this is not a problem with the image. I have looked at the
> > logs; they throw the errors pasted here.
> >
> > It is waiting for the nova-conductor service, as shown in the logs. Is this the
> > issue? Do I need to explicitly start the conductor service or some other
> > service? The nova-compute logs are pasted below; please share your thoughts on
> > what the issue may be.
> >
> >
> > On Fri, Aug 20, 2021 at 8:11 AM Mohammed Naser <mnaser at vexxhost.com> wrote:
> >
> >> I suggest that you have a look at the compute node logs when it fails to
> >> spawn. I suspect the problem is not your images but something inside
> >> OpenStack.
> >>
> >>
> > tail: no files remaining
> > cloud at Meghctrl1:~$ sudo tail -f /var/log/nova/nova-compute.log
> > [sudo] password for cloud:
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task     transport_options=transport_options)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 642, in _send
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task     call_monitor_timeout)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 531, in wait
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task     message = self.waiters.get(msg_id, timeout=timeout)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 409, in get
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task     'to message ID %s' % msg_id)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 7a4e3a4acad3403a9966570cc259f7b7
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:00:01.467 2847961 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 50.03 sec
> > 2021-08-19 12:00:34.012 2847812 WARNING nova.conductor.api [req-afca4d09-bf9c-4bbc-ae88-22008d12b978 - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? Reattempting establishment of nova-conductor connection...: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 9f5b593361b34c8d91585c00e0514047
> > 2021-08-19 12:00:59.480 2847961 DEBUG oslo_service.periodic_task [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:00:59.481 2847961 DEBUG oslo_service.periodic_task [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:01:01.499 2847961 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 50.03 sec
> > 2021-08-19 12:01:34.040 2847812 WARNING nova.conductor.api [req-afca4d09-bf9c-4bbc-ae88-22008d12b978 - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? Reattempting establishment of nova-conductor connection...: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID e8e478be640e4ecab8a3d27c01960440
> >
> >
> > essage ID d620b1b3938e4428a0500c00df8d68ee
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID a493182a404242b79c8f11f8ec350e36
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task Traceback (most recent call last):
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 405, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     return self._queues[msg_id].get(block=True, timeout=timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/eventlet/queue.py", line 322, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     return waiter.wait()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/eventlet/queue.py", line 141, in wait
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     return get_hub().switch()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/eventlet/hubs/hub.py", line 298, in switch
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     return self.greenlet.switch()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task _queue.Empty
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task During handling of the above exception, another exception occurred:
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task Traceback (most recent call last):
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     task(self, context)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/nova/compute/manager.py", line 10450, in _cleanup_expired_console_auth_tokens
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_versionedobjects/base.py", line 177, in wrapper
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     args, kwargs)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/nova/conductor/rpcapi.py", line 243, in object_class_action_versions
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     args=args, kwargs=kwargs)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/rpc/client.py", line 179, in call
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     transport_options=self.transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/transport.py", line 128, in _send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     transport_options=transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     transport_options=transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 642, in _send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     call_monitor_timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 531, in wait
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     message = self.waiters.get(msg_id, timeout=timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task   File "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 409, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task     'to message ID %s' % msg_id)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID a493182a404242b79c8f11f8ec350e36
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:01.681 2847961 DEBUG oslo_service.periodic_task [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:27:01.687 2847961 DEBUG oslo_service.periodic_task [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:27:02.368 2847961 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 50.03 sec
> >
> >
> >
> > [Attachment: Error-nov-log-file_while_scheduling_Windows_VM_instance_in_horizaon_dashboard.txt]
> >
> >
> >
> >
> >
> >> If I was to guess it’s probably missing UEFI firmware packages :)
> >>
> >> On Wed, Aug 18, 2021 at 9:17 AM KK CHN <kkchn.in at gmail.com> wrote:
> >>
> >>> Error : failed to perform requested operation on instance "WindowsVM",
> >>> the instance has error status. Please try again later [Error exceeded
> >>> maximum number of retries. Exhausted all hosts available for retrying
> >>> build failures for instance e3d5c095-7d26-4b1e-89d1-d1a6e20a45041
> >>>
> >>> I am trying to import a Windows 2012 single-disk VM to OpenStack Ussuri,
> >>> with Glance and QEMU KVM.
> >>>
> >>> On a bare-metal KVM host I am able to import and boot this Windows VM,
> >>> which was exported from the rhevm hypervisor as a vhdx image.
> >>> What I have done is:
> >>>
> >>> 1. converted this Windows image from vhdx to qcow2
> >>> 2. root at MeghCtrol1:/home/cloud/CMOBB_APP# virt-install --name WINDOWS
> >>> --ram=1048 --vcpus=1 --cpu host --hvm --disk
> >>> path=BackUP2_CMAPP_disk_1_Windows_qcow2_imagefile,device=disk,format=qcow2,bus=virtio
> >>> --graphics vnc --boot uefi
> >>>
> >>> This uploaded the qcow2 image of the Windows VM to the KVM hypervisor, and
> >>> it is working.
> >>>
> >>> But when I import it into OpenStack, I am unable to launch an instance
> >>> from the image.
> >>>
> >>> These are the steps I performed..
> >>>
> >>> 1. openstack image create "WindowsVM" --file CMAPP_disk_1.qcow2
> >>> --disk-format qcow2 --container-format bare --public
> >>>
> >>> 4.openstack image set --property hw_firmware_type=uefi --property
> >>> os_secure_boot=required WindowsVM
> >>>
> >>> 5.openstack image set --property hw_firmware_type=uefi --property
> >>> hw_disk_bus=ide WindowsVM
> >>>
> >>> 6.openstack image show WindowsVM
> >>>
> >>> 7. root at dmzcloud:/home/cloud# openstack image show WindowsVM|grep
> >>> "properties"
> >>> | properties | hw_disk_bus='ide', hw_firmware_type='uefi',
> >>> os_hash_algo='sha512',
> >>>
> >>>
> os_hash_value='753ee596980409e1e72d6d020c8219c56a6ada8b43f634fb575c594a245725a398e45982c0a1ad72b3fc3451cde62cceb9ff22be044863b31ecdd7893b049349',
> >>> os_hidden='False', os_secure_boot='required',
> >>> owner_specified.openstack.md5='',
> >>> owner_specified.openstack.object='images/WindowsVM',
> >>> owner_specified.openstack.sha256='' |
> >>> root at dmzcloud:/home/cloud#
> >>>
> >>>
> >>> Then I logged into the Horizon dashboard, selected the imported image from
> >>> Images, and tried to launch an instance from it with a flavor of 550 GB
> >>> disk, 4 vCPUs and 8 GB RAM.
> >>>
> >>> The instance spawning ran for 30 minutes and then threw the error which I
> >>> pasted at the top of this mail, in the top-right corner of the Horizon
> >>> dashboard.
> >>>
> >>> How can I solve this error and boot the Windows machine successfully?
> >>>
> >>>
> >>> """
> >>> Error : failed to perform requested operation on instance "WindowsVM
> "the
> >>> instance has error status. Please try again later [Error exceeded
> maximum
> >>> number of retries. Exhausted all hosts available for retrying build
> >>> failures for instance e3d5c095-7d26-4b1e-89d1-d1a6e20a45041
> >>>
> >>>
> >>> """
> >>>
> >>> Any help highly appreciated.
> >>>
> >>> Kris
> >>>
> >>> --
> >> Mohammed Naser
> >> VEXXHOST, Inc.
> >>
>
>
>
>
>