[Openstack] Ocata Update libvirtd Error

Georgios Dimitrakakis giorgis at acmac.uoc.gr
Sun Oct 15 19:00:12 UTC 2017


 For future reference, in case anybody else runs into this: besides the 
 "hw_disk_bus='scsi'" property I had to add the "hw_scsi_model=virtio-scsi" 
 property, which makes sense, although it had been working without it so far.
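
 In case it is useful, both properties can be set on an existing Glance 
 image with something like the following (the image name here is just an 
 example taken from the reproduction below):

     openstack image set --property hw_disk_bus=scsi \
         --property hw_scsi_model=virtio-scsi cirros-SCSI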

 Best,

 G.

>> Hello,
>>
>> I think I have identified what the issue is. The problem is with
>> images that have the "hw_disk_bus='scsi'" property set and are
>> launched with an ephemeral disk, a swap disk, or both.
>>
>> In order to reproduce the problem one can do the following.
>>
>> Download the cirros image and upload it to Glance twice (let's call
>> the images cirros-SCSI and cirros-VD).
>> Add the "hw_disk_bus='scsi'" property to the cirros-SCSI image.
>>
>> Create a few flavors with the same vCPU, RAM and root disk settings,
>> varying only the ephemeral disk, the swap, or both.
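>>
>> For reference, this setup can be scripted roughly as follows; the
>> cirros version/URL and the flavor names are only examples:
>>
>>     wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
>>     openstack image create --disk-format qcow2 --container-format bare \
>>         --file cirros-0.3.5-x86_64-disk.img cirros-VD
>>     openstack image create --disk-format qcow2 --container-format bare \
>>         --file cirros-0.3.5-x86_64-disk.img cirros-SCSI
>>     openstack image set --property hw_disk_bus=scsi cirros-SCSI
>>
>>     openstack flavor create --vcpus 1 --ram 512 --disk 1 test.plain
>>     openstack flavor create --vcpus 1 --ram 512 --disk 1 --ephemeral 1 test.eph
>>     openstack flavor create --vcpus 1 --ram 512 --disk 1 --swap 512 test.swap
>>     openstack flavor create --vcpus 1 --ram 512 --disk 1 --ephemeral 1 \
>>         --swap 512 test.both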
>>
>>
>> Try to launch instances using cirros-VD with any combination of
>> ephemeral disk, swap, or both ---> no problem at all.
>>
>> Try to launch instances using cirros-SCSI ---> all combinations that
>> have either an ephemeral disk or a swap disk or both at the same time
>> produce the error: "libvirtError: unsupported configuration: Found
>> duplicate drive address for disk with target name 'sda' controller='0'
>> bus='0' target='0' unit='0'"
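>>
>> A launch along the following lines should be enough to hit it, with
>> only the image and flavor varied (the network ID is a placeholder):
>>
>>     openstack server create --image cirros-SCSI --flavor test.both \
>>         --nic net-id=<network-uuid> scsi-test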
>>
>> The only instance that can be launched successfully when the
>> "hw_disk_bus='scsi'" property is set on the image is one using a
>> flavor with neither an ephemeral disk nor swap.
>>
>> Has anyone else encountered this? Could it be considered a bug? Any
>> ideas on how to solve it are most welcome.
>>
>> Best regards,
>>
>> G.
>>
>
> Please check out this bug: 
> https://bugs.launchpad.net/nova/+bug/1686116
>
>>
>>
>>> Hi David,
>>>
>>> thx for the info provided.
>>>
>>> I understand what "reset-state" does; that's why I've already tried
>>> a hard reboot, but unfortunately it only brought the instance back to
>>> the error state.
>>>
>>> What worries me more is that there are no XML files in
>>> "/etc/libvirt/qemu" for the erroneous instances, which is why the
>>> snapshot fails.
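>>>
>>> In case it helps with debugging, one way to check what libvirt still
>>> knows about a domain is something like the following (the instance
>>> name is a placeholder):
>>>
>>>     virsh list --all
>>>     virsh dumpxml instance-0000000a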
>>>
>>> Any ideas?
>>>
>>> These all appeared as soon as I updated to the latest Ocata version.
>>>
>>> Best,
>>>
>>> G.
>>>
>>>
>>>> Hi G.,
>>>>
>>>> I don't have Ocata up anywhere, but as a "best practice" I generally
>>>> do:
>>>> nova reset-state --active $UUID
>>>> followed immediately by
>>>> nova reboot --hard $UUID
>>>>
>>>> to try and "restore/resurrect" errored instances. The reset-state
>>>> --active doesn't actually do anything to the instance; it just
>>>> manipulates the nova DB. The reboot --hard does a fairly clean
>>>> "reboot" of the instance even if it is off. None of this should
>>>> really have anything to do with Ceph per se, so I'm not sure it will
>>>> have the desired outcome on your cloud.
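>>>>
>>>> Afterwards, something like the following should show whether the
>>>> instance came back (same $UUID as above):
>>>>
>>>>     nova show $UUID | egrep "status|power_state"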
>>>>
>>>> On Sun, Oct 8, 2017 at 8:46 AM, Georgios Dimitrakakis  wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> Today I tried to update my OpenStack installation to the latest
>>>>> Ocata version.
>>>>>
>>>>> What I did was to shut off all running instances, perform all
>>>>> updates and then reboot the controller and compute nodes.
>>>>>
>>>>> Everything seemed to finish successfully, but unfortunately when I
>>>>> tried to power on instances that had volumes attached to them
>>>>> (provided by Ceph) I got the following error:
>>>>>
>>>>> libvirtError: unsupported configuration: Found duplicate drive
>>>>> address for disk with target name sda controller=0 bus=0 target=0
>>>>> unit=0
>>>>>
>>>>> and the instance status is now "Error", with "No State" as the
>>>>> power state.
>>>>>
>>>>> This happened only to the instances that already had volumes
>>>>> attached to them. All the rest of the instances booted up normally.
>>>>>
>>>>> I have tried to reset the state of the problematic instances
>>>>> using "nova reset-state --active $instance_id" and then to take a
>>>>> snapshot of them so that I can delete and recreate them.
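>>>>>
>>>>> The commands were along these lines (the snapshot name is just an
>>>>> example):
>>>>>
>>>>>     nova reset-state --active $instance_id
>>>>>     nova image-create $instance_id snapshot-before-recreate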
>>>>>
>>>>> Unfortunately, although the state update was successful, the
>>>>> snapshot couldn't be taken because of this:
>>>>>
>>>>> InstanceNotRunning: Instance $instance_id is not running.
>>>>>
>>>>> Any ideas on what I can do to start my instances again? Is there a
>>>>> related bug?
>>>>>
>>>>> Best regards,
>>>>>
>>>>> G.
>>>>>
>>>>
>>>
>>>
>>>
>>
>>



