Windows2012 VM image importing and Instance Launch Failure
KK CHN
kkchn.in at gmail.com
Wed Sep 1 12:32:27 UTC 2021
On Fri, Aug 27, 2021 at 11:00 AM <DHilsbos at performair.com> wrote:
> Kris;
>
> First, you can add properties during the create command, so steps 1
> through 3 can become:
> # openstack image create "M_Windows_VM_NAME_Blaah" --file
> Windows_VM_disk.qcow2 \
> --disk-format qcow2 --container-format bare --public --property
> hw_firmware_type=uefi \
> --property os_secure_boot=required --property hw_disk_bus=ide
>
I was able to import the Windows VM, which was exported from RHV
(oVirt), into my OpenStack (Ussuri, QEMU-KVM, Glance, Cinder with Ceph)
and boot the first bootable disk of this multi-disk Windows 2012 RC VM.
It additionally has a data disk: originally a 600 GB volume holding
about 9 GB of data.
> Secondly, no. Only the boot image (C: drive) needs the properties. So
> any additional disks for the same VM can use a command like this:
>
I executed this step for the second disk's qcow2 image on my controller
node:

# openstack image create "M_Windows_VM_NAME_DataDisk2" --file
Windows_VM_disk.qcow2 --disk-format qcow2 --public
>
>
The image then appears in the Horizon Images section. I tried to create
a volume through the Horizon GUI with 600 GB of disk space allocated;
it ran for 10 to 20 minutes and went to the Error state.
I made multiple attempts, deleting each errored volume and creating a
new one, but with the same result.
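When a volume creation ends in Error, the CLI and the cinder-volume log usually show the underlying reason. A quick way to look (the volume name is from above; the log path is a common default, not guaranteed for every deployment):

```shell
# Status and ID of the failed volume as Cinder reports them
openstack volume show CMDA_DB_SecDisk_Vol -c id -c status

# On the node running cinder-volume, check for a traceback around the
# time of the failure
sudo grep -B2 -A10 'ERROR' /var/log/cinder/cinder-volume.log | tail -n 60
```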
So I tried to do it through the CLI.
First I listed the uploaded images:
# openstack image list
c6695789-8524-446c-98c9-282b2880551c | CMDALocalDBWindowsdisk2 | active
// This is the data disk image
root@Meghctrl1:/home/cloud# openstack volume create --image
c6695789-8524-446c-98c9-282b2880551c --size 600 --availability-zone nova
CMDA_DB_SecDisk_Vol
root@Meghctrl1:/home/cloud# openstack volume list
+--------------------------------------+---------------------+-----------+------+-------------+
| ID                                   | Name                | Status    | Size | Attached to |
+--------------------------------------+---------------------+-----------+------+-------------+
| 32f9452f-368f-4f26-b70d-c066c4c7e01c | CMDA_DB_SecDisk_Vol | available |  600 |             |
+--------------------------------------+---------------------+-----------+------+-------------+
I then went to the Horizon dashboard and tried to attach this volume,
CMDA_DB_SecDisk_Vol, to my running Windows VM (the first disk, which
was imported and booted successfully).
I got this status alert on the dashboard while doing so: "Info:
Attaching volume CMDA_DB_SecDisk_Vol
(32f9452f-368f-4f26-b70d-c066c4c7e01c) to instance
6630c9c8-135e-45b7-958b-c563337eb9c4 on /dev/hdb"
But the volume is not attached to the running Windows VM.
You can see it is attached to none:
root@Meghctrl1:/home/cloud# openstack volume list
+--------------------------------------+----------------------+-----------+------+-------------+
| ID                                   | Name                 | Status    | Size | Attached to |
+--------------------------------------+----------------------+-----------+------+-------------+
| 32f9452f-368f-4f26-b70d-c066c4c7e01c | CMDA_DB_SecDisk_Vol  | available |  600 |             |
| 0c404d4c-1685-4a5a-84ae-1ee413f02f68 | cloudtest_DMS1       | error     |  102 |             |
| e9949f4f-b23a-4a30-8620-5a8d234b523b | DMS_VM_APPLICATION-2 | available |  100 |             |
+--------------------------------------+----------------------+-----------+------+-------------+
Here you can see it is attached to none. Also, in the Horizon dashboard
I clicked the running instance for more information: it shows no
volumes attached to this instance.
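The same state can be cross-checked from the CLI; if Nova also reports no attachment, the nova-compute log on the host usually explains why the attach was rolled back. For example (IDs taken from the output above):

```shell
# Nova's view of the instance's attached volumes
openstack server show 6630c9c8-135e-45b7-958b-c563337eb9c4 \
    -c name -c volumes_attached

# Cinder's view of the same volume
openstack volume show 32f9452f-368f-4f26-b70d-c066c4c7e01c \
    -c status -c attachments
```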
Instance details from the Horizon dashboard:

Name: TestDB_CMDA_disk1
ID: 6630c9c8-135e-45b7-958b-c563337eb9c4
Project ID: b1bf4df41dc34e3ab5c3b0318158263d
Status: Active
Locked: False
Availability Zone: nova
Created: Sept. 1, 2021, 12:24 p.m. (age 5 hours, 9 minutes)
Host: Meghctrl1
Flavor: Medium (ID 5bb6bc50-b2f7-49b2-8f01-7a631af5ff8c), RAM 4GB, 2 VCPUs, Disk 150GB
IP Addresses: Tenant_NW 172.16.100.248
Security Groups: default
- ALLOW IPv6 from default
- ALLOW IPv4 icmp to 0.0.0.0/0
- ALLOW IPv4 to 0.0.0.0/0
- ALLOW IPv4 tcp from 0.0.0.0/0
- ALLOW IPv4 icmp from 0.0.0.0/0
- ALLOW IPv4 from default
- ALLOW IPv6 to ::/0
Metadata: Key Name TNSDC; Image Name CMDALocalDBWindows1; Image ID 6437fde7-db0c-4298-9d92-1f70aae3c894
Volumes Attached: No volumes attached.
Details of the CLI-created disk volume, which cannot be attached to the
running Windows 2012 RC instance, as shown in the Horizon GUI:
Name: CMDA_DB_SecDisk_Vol
ID: 32f9452f-368f-4f26-b70d-c066c4c7e01c
Project ID: b1bf4df41dc34e3ab5c3b0318158263d
Status: Available
Group: -
Size: 600 GiB
Type: __DEFAULT__
Bootable: No
Encrypted: No
Created: Sept. 1, 2021, 4:07 p.m.
Attached To: Not attached
Volume Source: Image
Metadata: None
What am I doing wrong here? Any hints or suggestions for adding this
data disk to the running Windows instance are most welcome.
Or do I need to add the volume to the instance through the CLI? From
the docs I am seeing this; will it also work for running Windows VM
instances?
openstack server add volume 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 \
573e024d-5235-49ce-8332-be1576d323f8 --device /dev/vdb
Would you advise attaching the volume through the CLI like the
following?

# openstack server add volume 6630c9c8-135e-45b7-958b-c563337eb9c4 \
32f9452f-368f-4f26-b70d-c066c4c7e01c \
--device /dev/hdb

Will it solve this volume attachment issue? Or am I seriously missing
any crucial parameters or steps for attaching additional disk volumes
to Windows VMs?
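One alternative worth sketching: let Nova pick the device name instead of forcing /dev/hdb, and attach while the instance is stopped (whether the stop is strictly needed depends on the disk bus in use; this is a suggestion, not a verified fix):

```shell
openstack server stop 6630c9c8-135e-45b7-958b-c563337eb9c4

# No --device argument: Nova chooses the next free device name
openstack server add volume 6630c9c8-135e-45b7-958b-c563337eb9c4 \
    32f9452f-368f-4f26-b70d-c066c4c7e01c

openstack server start 6630c9c8-135e-45b7-958b-c563337eb9c4
```

After boot, the new disk should show up offline in Windows Disk Management, where it can be brought online and given a drive letter.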
Any hints are most welcome. If there is any other information or any
specific log file I need to post, kindly let me know the file names;
I am doing this for the first time.
Thanks in advance for your answers.
Kris
> You might consider making sure that these additional images aren't
> marked as bootable.
>
> That said... Have you considered importing the disks directly into Cinder
> instead?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President – Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
>
> From: KK CHN [mailto:kkchn.in at gmail.com]
> Sent: Thursday, August 26, 2021 10:04 PM
> To: Dominic Hilsbos
> Subject: Re: Windows2012 VM image importing and Instance Launch Failure
>
> Hi Dominic,
> For Importing Single disk Windows VM I performed the following steps on
> the vhdx to qcow2 converted image
> Step 1:
> root@Meghctrl1:/home/cloud# openstack image create
> "M_Windows_VM_NAME_Blaah" --file Windows_VM_disk.qcow2
> --disk-format qcow2 --container-format bare --public
> Step 2:
> root@Meghctrl1:/home/cloud# openstack image set --property
> hw_firmware_type=uefi --property
> os_secure_boot=required My_Windows_VM_NAME_Blaah
> Step 3:
> root@Meghctrl1:/home/cloud# openstack image set --property
> hw_firmware_type=uefi
> --property hw_disk_bus=ide MY_Windows_VM_NAME_Blaah
>
> And I am able to see the image in the Horizon dashboard "Images" tab
> and launch the instance from there. Everything went fine and I am able
> to get the single-disk Windows VM running.
>
> Query:
> If I have a multi-disk Windows VM, do I need to perform all three of
> the above steps for each of the qcow2-converted images? (For example:
> I have a database server, a Windows VM already running in Hyper-V with
> two disks, a 100 GB disk and a 600 GB disk image. Of these two disks,
> one will hold the Windows OS and the other will be an attached disk
> without the Windows OS, right?)
> 1. For the first disk, I will perform all three of the above steps
> after converting it from vhdx to qcow2, so that the image will appear
> in Horizon and I can launch an instance as I did earlier for the
> single-disk Windows VM.
> 2. For the 600 GB second disk (vhdx format), I will convert it to
> qcow2; do I then need to perform all three steps above exactly as
> before? Or what changes do I need to make to import this disk, given
> that it is an additional attached disk in Hyper-V?
> Request your guidance or any suggestions in this regard.
>
> Thank You,
> Kris
>
>
>
> On Wed, Aug 25, 2021 at 9:12 PM <DHilsbos at performair.com> wrote:
> Kris;
>
> This shouldn’t be all that much more difficult than Linux, nor that much
> harder than a single-disk Windows VM.
>
> I would start by disabling the SQL service, as the OpenStack instance will
> boot once before you can add the data disks (at least it does for me when
> using Horizon).
>
> Export the disks, convert them, and import them, all as you normally
> would. Create the instance, as you normally would. The instance will
> start; shut it back down. Attach the additional volumes, and start the
> instance back up.
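The procedure above, sketched end to end (image, volume and server names here are placeholders):

```shell
# Import: only the boot disk needs the UEFI / secure-boot properties
openstack image create "win-boot" --file boot_disk.qcow2 \
    --disk-format qcow2 --container-format bare --public \
    --property hw_firmware_type=uefi --property os_secure_boot=required
openstack image create "win-data" --file data_disk.qcow2 \
    --disk-format qcow2 --container-format bare --public

# Turn the data image into a volume, boot the instance, then attach
openstack volume create --image win-data --size 600 win-data-vol
openstack server create --image win-boot --flavor Medium win-vm
openstack server stop win-vm
openstack server add volume win-vm win-data-vol
openstack server start win-vm
```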
>
> It would be nice to have an option to not start the instance after
> creation.
>
> You could also investigate the “openstack server create” command; it has a
> --volume argument that you may be able to pass more than once, though I
> suspect not.
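If repeating --volume is not supported, --block-device-mapping may do the job in one call; it is worth checking `openstack help server create` in your client version (device name and IDs below are placeholders):

```shell
# Map an existing 600 GB volume to vdb at create time; the mapping
# format is <dev-name>=<id>:<type>:<size>:<delete-on-terminate>
openstack server create --image win-boot --flavor Medium \
    --block-device-mapping \
    vdb=32f9452f-368f-4f26-b70d-c066c4c7e01c:volume:600:false \
    win-vm
```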
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President – Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
>
> From: KK CHN [mailto:kkchn.in at gmail.com]
> Sent: Tuesday, August 24, 2021 11:19 PM
> To: openstack-discuss at lists.openstack.org
> Subject: Re: Windows2012 VM image importing and Instance Launch Failure
>
> Thank you all.
>
> I was able to get the single-disk Windows VM exported from Hyper-V and
> imported into OpenStack (Ussuri, Glance and QEMU KVM); the instance
> launched and shows the status Running in the Power State tab of the
> Horizon dashboard.
>
> (I need to check with the administrator user to log in and confirm the
> VM is working as expected.)
>
> My second phase: I need to import a multiple-disk Windows VM into my
> OpenStack setup.
>
> (For the single-disk Windows vhdx file, I converted it to a qcow2 file
> and imported it into OpenStack using the steps I posted in my previous
> emails.)
>
> Now, how can I import a multiple-disk Windows VM into OpenStack?
>
> The first step is to convert the vhdx files exported from Hyper-V to
> qcow2 files.
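The conversion itself is typically done with qemu-img (the filenames here are examples):

```shell
# -p shows progress; run once per exported disk
qemu-img convert -p -O qcow2 DB_disk_100GB.vhdx DB_disk_100GB.qcow2
qemu-img convert -p -O qcow2 DB_disk_600GB.vhdx DB_disk_600GB.qcow2

# Sanity-check the result before uploading to Glance
qemu-img info DB_disk_100GB.qcow2
```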
>
> After this step, what steps need to be performed to bring a multi-disk
> Windows VM into OpenStack?
>
> For Eg: I have a Windows VM
>
> 1. An application server with a 1 TB single-disk image exported from
> Hyper-V. This I can import into my OpenStack setup with the steps I
> followed in my earlier emails, as it is a single-disk image.
>
>
> 2. A database server for the above application, with 700 GB across
> multiple disks (100 GB and 600 GB disk images exported from Hyper-V in
> vhdx format).
>
> How can this server be imported into my OpenStack setup (Ussuri,
> Glance and QEMU KVM)? As this is from Windows Hyper-V, I am new to a
> multi-disk environment coming from Windows.
> (For Linux VMs exported from oVirt (RHV), I am able to import
> multi-disk VMs into OpenStack, as they use LVM and I have to mount
> them by VG name and add entries to the /etc/fstab file.)
>
> But for Windows multi-disk VMs, how do I perform this? What steps do I
> have to perform to import them into OpenStack? Any hints or
> suggestions are most welcome.
>
> Kris
>
>
> On Tue, Aug 24, 2021 at 1:40 PM Eugen Block <eblock at nde.ag> wrote:
> Of course you need a running nova-conductor, how did you launch VMs before?
>
>
> Zitat von KK CHN <kkchn.in at gmail.com>:
>
> > Yes, I too suspect this is not a problem with the image. I have seen
> > the logs; they are throwing the errors pasted here.
> >
> > It is waiting for the nova-conductor service, as shown in the logs.
> > Is this the issue? Do I need to explicitly start the conductor
> > service or some other service? The nova-compute logs are pasted below
> > these lines; share your thoughts on what the issue may be.
> >
> >
> > On Fri, Aug 20, 2021 at 8:11 AM Mohammed Naser <mnaser at vexxhost.com>
> wrote:
> >
> >> I suggest that you have a look at the compute node logs when it fails to
> >> spawn. I suspect the problem is not Your images but something inside
> >> openstack
> >>
> >>
> > tail: no files remaining
> > cloud@Meghctrl1:~$ sudo tail -f /var/log/nova/nova-compute.log
> > [sudo] password for cloud:
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > transport_options=transport_options)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 642, in _send
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > call_monitor_timeout)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 531, in wait
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > message = self.waiters.get(msg_id, timeout=timeout)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 409, in get
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > 'to messageID %s' % msg_id)
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
> > reply to message ID 7a4e3a4acad3403a9966570cc259f7b7
> > 2021-08-19 11:59:59.477 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:00:01.467 2847961 WARNING oslo.service.loopingcall [-]
> > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
> > outlasted interval by 50.03 sec
> > 2021-08-19 12:00:34.012 2847812 WARNING nova.conductor.api
> > [req-afca4d09-bf9c-4bbc-ae88-22008d12b978 - - - - -] Timed out waiting
> > for nova-conductor. Is it running? Or did this service start before
> > nova-conductor? Reattempting establishment of nova-conductor
> > connection...: oslo_messaging.exceptions.MessagingTimeout: Timed out
> > waiting for a reply to message ID 9f5b593361b34c8d91585c00e0514047
> > 2021-08-19 12:00:59.480 2847961 DEBUG oslo_service.periodic_task
> > [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic
> > task ComputeManager._check_instance_build_time run_periodic_tasks
> > /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:00:59.481 2847961 DEBUG oslo_service.periodic_task
> > [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic
> > task ComputeManager._sync_scheduler_instance_info run_periodic_tasks
> > /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:01:01.499 2847961 WARNING oslo.service.loopingcall [-]
> > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
> > outlasted interval by 50.03 sec
> > 2021-08-19 12:01:34.040 2847812 WARNING nova.conductor.api
> > [req-afca4d09-bf9c-4bbc-ae88-22008d12b978 - - - - -] Timed out waiting
> > for nova-conductor. Is it running? Or did this service start before
> > nova-conductor? Reattempting establishment of nova-conductor
> > connection...: oslo_messaging.exceptions.MessagingTimeout: Timed out
> > waiting for a reply to message ID e8e478be640e4ecab8a3d27c01960440
> >
> >
> > essage ID d620b1b3938e4428a0500c00df8d68ee
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Error during
> > ComputeManager._cleanup_expired_console_auth_tokens:
> > oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
> > reply to message ID a493182a404242b79c8f11f8ec350e36
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > Traceback (most recent call last):
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 405, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > return self._queues[msg_id].get(block=True, timeout=timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File "/usr/local/lib/python3.7/dist-packages/eventlet/queue.py", line
> > 322, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > return waiter.wait()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File "/usr/local/lib/python3.7/dist-packages/eventlet/queue.py", line
> > 141, in wait
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > return get_hub().switch()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File "/usr/local/lib/python3.7/dist-packages/eventlet/hubs/hub.py",
> > line 298, in switch
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > return self.greenlet.switch()
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> _queue.Empty
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > During handling of the above exception, another exception occurred:
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > Traceback (most recent call last):
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> "/usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py",
> > line 216, in run_periodic_tasks
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > task(self, context)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File "/usr/local/lib/python3.7/dist-packages/nova/compute/manager.py",
> > line 10450, in _cleanup_expired_console_auth_tokens
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > objects.ConsoleAuthToken.clean_expired_console_auths(context)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> "/usr/local/lib/python3.7/dist-packages/oslo_versionedobjects/base.py",
> > line 177, in wrapper
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > args, kwargs)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File "/usr/local/lib/python3.7/dist-packages/nova/conductor/rpcapi.py",
> > line 243, in object_class_action_versions
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > args=args, kwargs=kwargs)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/rpc/client.py",
> > line 179, in call
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > transport_options=self.transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/transport.py",
> > line 128, in _send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > transport_options=transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 654, in send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > transport_options=transport_options)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 642, in _send
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > call_monitor_timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 531, in wait
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > message = self.waiters.get(msg_id, timeout=timeout)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > File
> >
> "/usr/local/lib/python3.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 409, in get
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 'to message ID %s' % msg_id)
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
> > reply to message ID a493182a404242b79c8f11f8ec350e36
> > 2021-08-19 12:27:00.387 2847961 ERROR oslo_service.periodic_task
> > 2021-08-19 12:27:01.681 2847961 DEBUG oslo_service.periodic_task
> > [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic
> > task ComputeManager._check_instance_build_time run_periodic_tasks
> > /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:27:01.687 2847961 DEBUG oslo_service.periodic_task
> > [req-58a0e182-6e56-4695-b4db-75624ca69c7b - - - - -] Running periodic
> > task ComputeManager._sync_scheduler_instance_info run_periodic_tasks
> > /usr/local/lib/python3.7/dist-packages/oslo_service/periodic_task.py:211
> > 2021-08-19 12:27:02.368 2847961 WARNING oslo.service.loopingcall [-]
> > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
> > outlasted interval by 50.03 sec
> >
> >
> >
> > [Attachment:
> > Error-nov-log-file_while_scheduling_Windows_VM_instance_in_horizaon_dashboard.txt]
> >
> >
> >
> >
> >
> >> If I was to guess it’s probably missing UEFI firmware packages :)
> >>
> >> On Wed, Aug 18, 2021 at 9:17 AM KK CHN <kkchn.in at gmail.com> wrote:
> >>
> >>> Error: failed to perform requested operation on instance
> >>> "WindowsVM", the instance has error status. Please try again later
> >>> [Error exceeded maximum number of retries. Exhausted all hosts
> >>> available for retrying build failures for instance
> >>> e3d5c095-7d26-4b1e-89d1-d1a6e20a45041]
> >>>
> >>> I am trying to import a Windows 2012 single-disk VM to OpenStack
> >>> Ussuri, with Glance and QEMU KVM.
> >>>
> >>> On a bare-metal KVM machine I am able to import and boot this
> >>> Windows VM, which was exported from the RHEV (oVirt) hypervisor as
> >>> a vhdx image.
> >>> What I have done is:
> >>>
> >>> 1. converted this windows image from vhdx to qcow2
> >>> 2. root@MeghCtrol1:/home/cloud/CMOBB_APP# virt-install --name WINDOWS
> >>> --ram=1048 --vcpus=1 --cpu host --hvm --disk
> >>> path=BackUP2_CMAPP_disk_1_Windows_qcow2_imagefile,device=disk,
> >>> format=qcow2,bus=virtio --graphics vnc --boot uefi
> >>>
> >>> This uploaded the qcow2 image of the Windows VM to the KVM
> >>> hypervisor, and it is working.
> >>>
> >>> But when I import it into OpenStack, I am unable to launch an
> >>> instance from the image.
> >>>
> >>> These are the steps I performed..
> >>>
> >>> 1. openstack image create "WindowsVM" --file CMAPP_disk_1.qcow2
> >>> --disk-format qcow2 --container-format bare --public
> >>>
> >>> 4.openstack image set --property hw_firmware_type=uefi --property
> >>> os_secure_boot=required WindowsVM
> >>>
> >>> 5.openstack image set --property hw_firmware_type=uefi --property
> >>> hw_disk_bus=ide WindowsVM
> >>>
> >>> 6.openstack image show WindowsVM
> >>>
> >>> 7. root@dmzcloud:/home/cloud# openstack image show WindowsVM | grep
> >>> "properties"
> >>> | properties | hw_disk_bus='ide', hw_firmware_type='uefi',
> >>> os_hash_algo='sha512',
> >>>
> >>>
> os_hash_value='753ee596980409e1e72d6d020c8219c56a6ada8b43f634fb575c594a245725a398e45982c0a1ad72b3fc3451cde62cceb9ff22be044863b31ecdd7893b049349',
> >>> os_hidden='False', os_secure_boot='required',
> >>> owner_specified.openstack.md5='',
> >>> owner_specified.openstack.object='images/WindowsVM',
> >>> owner_specified.openstack.sha256='' |
> >>> root@dmzcloud:/home/cloud#
> >>>
> >>>
> >>> Then I logged into the Horizon dashboard, selected the imported
> >>> image from Images, and tried to launch an instance with a flavor of
> >>> 550 GB disk, 4 VCPUs and 8 GB RAM.
> >>>
> >>> The instance spawning ran for 30 minutes and threw the error I
> >>> pasted at the start, in the top-right corner of the Horizon
> >>> dashboard.
> >>>
> >>> How can I solve this error and boot the Windows machine
> >>> successfully?
> >>>
> >>>
> >>> """
> >>> Error: failed to perform requested operation on instance
> >>> "WindowsVM", the instance has error status. Please try again later
> >>> [Error exceeded maximum number of retries. Exhausted all hosts
> >>> available for retrying build failures for instance
> >>> e3d5c095-7d26-4b1e-89d1-d1a6e20a45041]
> >>>
> >>>
> >>> """
> >>>
> >>> Any help highly appreciated.
> >>>
> >>> Kris
> >>>
> >>> --
> >> Mohammed Naser
> >> VEXXHOST, Inc.
> >>
>
>
>