[Openstack] cinder volume weird behavior

Jay S Bryant jsbryant at us.ibm.com
Tue Oct 1 19:07:21 UTC 2013


Ritesh,

I am not familiar with cgroups, so I am not sure whether they could have an 
effect.  I would try re-enabling them for good measure, though the initial 
information I get when googling cgroups doesn't point to a smoking gun.
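
For what it's worth, here is a minimal sketch (just reading /proc/cgroups) 
of how you could check which cgroup controllers the kernel currently has 
enabled; libvirt normally tries to use controllers such as cpu, devices 
and memory for its guests, though whether that interacts with iSCSI attach 
at all is an open question:

    # Minimal sketch: list the cgroup subsystems the kernel reports,
    # and whether each is enabled.
    with open("/proc/cgroups") as f:
        for line in f:
            if line.startswith("#"):
                continue
            name, hierarchy, num_cgroups, enabled = line.split()
            print("%-12s enabled=%s" % (name, enabled))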

I think the feedback that John Griffith provided mirrors my original 
response: since the volumes appear to be the right size on the V3700, the 
likely source of the problem is that you already have other volumes 
attached and are looking at the size information for those mount points 
rather than for the new volumes you are attaching.

Can you provide more details on how you are determining the size of the 
mounted volume?
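
For example, inside the guest, something along these lines distinguishes 
the raw size of the newly attached block device from the sizes df reports 
for already-mounted filesystems (a rough sketch; the device name vdb is 
only an assumption):

    import subprocess

    dev = "vdb"  # assumed name of the newly attached device in the guest

    # /sys/block/<dev>/size is the device size in 512-byte sectors.
    with open("/sys/block/%s/size" % dev) as f:
        sectors = int(f.read().strip())
    print("raw size of /dev/%s: %.1f GB" % (dev, sectors * 512 / 1e9))

    # df only reports mounted filesystems, which may belong to volumes
    # that were attached earlier, not to the one just attached.
    print(subprocess.check_output(["df", "-h"]).decode())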



Jay S. Bryant
Linux Developer - 
    OpenStack Enterprise Edition
                   
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbryant at us.ibm.com
--------------------------------------------------------------------
 All the world's a stage and most of us are desperately unrehearsed.
                   -- Sean O'Casey
--------------------------------------------------------------------



From:   Ritesh Nanda <riteshnanda09 at gmail.com>
To:     Jay S Bryant/Rochester/IBM at IBMUS, 
Date:   09/30/2013 02:28 PM
Subject:        Re: [Openstack] cinder volume weird behavior



Hello Jay,

I use Ubuntu 12.04 for the controller and compute nodes.  One change I 
recall making is stopping cgroups so that libvirt gets better performance.
Could that be the issue?  All other packages are at the latest version; I 
checked them all.

Are cgroups related in some way to iSCSI + libvirt?


On Tue, Oct 1, 2013 at 12:12 AM, Jay S Bryant <jsbryant at us.ibm.com> wrote:
Ritesh, 

What are you running for the host operating system on the Control Node and 
Compute Nodes?   

As I Google around for the errors you are seeing, it seems that there have 
been some changes to libvirt and QEMU that might help to avoid the 
problem.

Have you tried updating libvirt and qemu?
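
A quick sketch for capturing the versions in play on a node (the binary 
names are assumptions and can differ slightly between distributions):

    import subprocess

    # Print the versions of the libvirt/QEMU components on this node.
    for cmd in (["libvirtd", "--version"],
                ["virsh", "--version"],
                ["qemu-system-x86_64", "--version"]):
        try:
            print(subprocess.check_output(cmd).decode().strip())
        except (OSError, subprocess.CalledProcessError):
            print("%s: not found" % cmd[0])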



Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
                  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbryant at us.ibm.com
--------------------------------------------------------------------
All the world's a stage and most of us are desperately unrehearsed.
                  -- Sean O'Casey
-------------------------------------------------------------------- 



From:        Ritesh Nanda <riteshnanda09 at gmail.com> 
To:        Jay S Bryant/Rochester/IBM at IBMUS, 
Date:        09/30/2013 01:04 PM 
Subject:        Re: [Openstack] cinder volume weird behavior 



Hello Jay,

The bug there is for Hyper-V and doesn't look related to my issue, as I 
have been specifying mount points like /dev/vdb and /dev/vdc rather than 
using the auto functionality.  When I manually try to attach a created 
volume from Horizon, the nova-compute log on the compute node where the VM 
resides shows the error:

libvirtError: internal error unable to execute QEMU command 'device_add': 
Duplicate ID 'virtio-disk2' for device

Giving a random mount point while attaching to a VM, for example 
specifying /dev/vfg, attaches the volume, but the problem described below 
persists.
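
In case it is useful, a minimal sketch (using the libvirt Python bindings; 
the domain name below is just a placeholder) that lists the target devices 
a guest already has attached, since the "Duplicate ID 'virtio-disk2'" 
error suggests libvirt already sees a disk at that index:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # placeholder name
    tree = ET.fromstring(dom.XMLDesc(0))
    for disk in tree.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        src = None
        if source is not None:
            src = source.get("dev") or source.get("file")
        # e.g. "vdc virtio /dev/disk/by-path/...-lun-1"
        print(target.get("dev"), target.get("bus"), src)
    conn.close()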

The other problem that accompanies this is that the volumes I create, e.g. 
of 4 GB, get correctly created on the V3700, but when I attach one to a 
VM, the VM console always shows a different size.

Restarting open-iscsi on the cinder-volume node solves the issue for a few 
minutes; then it comes back.
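
In case it helps, a small sketch that dumps the current iSCSI sessions and 
their attached SCSI devices (iscsiadm needs root; running it before and 
after the problem shows up might reveal whether stale sessions or LUN 
mappings accumulate):

    import subprocess

    # "-P 3" prints the attached SCSI devices for each session.
    try:
        print(subprocess.check_output(
            ["iscsiadm", "-m", "session", "-P", "3"]).decode())
    except subprocess.CalledProcessError as exc:
        # iscsiadm exits non-zero when there are no active sessions.
        print("no active iSCSI sessions (exit %d)" % exc.returncode)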

I have a multi-node Grizzly setup with multi-host nova-network (VLAN 
manager) running.  cinder-api, cinder-scheduler and cinder-volume run on 
the controller, and cinder-volume also runs on all the compute nodes.


On Mon, Sep 30, 2013 at 11:07 PM, Jay S Bryant <jsbryant at us.ibm.com> 
wrote: 
Ritesh, 

I have noticed that the 'auto' option for the mount point is not working 
as expected right now.  There is an issue open that seems similar: 
https://bugs.launchpad.net/nova/+bug/1153842  The bug spells out Hyper-V, 
but I believe the same thing is happening for KVM.  As you have noted, it 
is pretty easy to work around.  I hope to look at the issue after I get 
some other bugs fixed.

With regards to the size problem, what size does the volume appear to be 
on the V3700?  Does the size of the volume appear as expected there?  Is 
it associated with the expected host?  Are you sure you are looking at the 
right device from the compute node?
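
On the compute node, the names under /dev/disk/by-path encode the iSCSI 
target IQN and LUN, which makes it easier to tell which local device 
belongs to which volume.  A minimal sketch:

    import os

    # Map iSCSI by-path names (which include the target IQN and LUN) to
    # the underlying /dev/sdX devices on the compute node.
    by_path = "/dev/disk/by-path"
    for name in sorted(os.listdir(by_path)):
        if "iscsi" in name:
            print(name, "->",
                  os.path.realpath(os.path.join(by_path, name)))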



Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
                  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbryant at us.ibm.com
--------------------------------------------------------------------
All the world's a stage and most of us are desperately unrehearsed.
                  -- Sean O'Casey
-------------------------------------------------------------------- 



From:        Ritesh Nanda <riteshnanda09 at gmail.com> 
To:        "openstack at lists.openstack.org" <openstack at lists.openstack.org>, 
Date:        09/30/2013 05:37 AM 
Subject:        [Openstack] cinder volume weird behavior 




Hello,

I have a Grizzly setup in which I run Cinder against an IBM Storwize 
V3700.  Cinder shows a weird behavior: every time I create a volume of 
some size and attach it to a VM, it shows a different size.

E.g. I create a 4 GB volume, attach it to a VM, and it shows up as 15 GB.  
This is different every time; sometimes it shows a volume smaller than the 
size it was created with.

While attaching a volume to a VM I sometimes get an error on the compute 
nodes stating:


d9f36a440abdf2fdd] [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b] Failed to attach volume 676ef5b1-129b-4d42-b38d-df2005a3d634 at /dev/vdc
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b] Traceback (most recent call last):
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2878, in _attach_volume
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     mountpoint)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 981, in attach_volume
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     disk_dev)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     self.gen.next()
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 968, in attach_volume
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     rv = execute(f,*args,**kwargs)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     rv = meth(*args,**kwargs)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 422, in attachDeviceFlags
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b] libvirtError: internal error unable to execute QEMU command 'device_add': Duplicate ID 'virtio-disk2' for device

Then, if I change the mount point from /dev/vdc to some random mount 
point, it attaches the disk, but the problem of it showing different sizes 
remains.

Restarting the open-iscsi service and reattaching the volume to the VM 
solves the issue.
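
Something lighter-weight that can be tried instead of a full open-iscsi 
restart is rescanning the existing sessions (just a sketch; whether it 
helps in this situation is an open question):

    import subprocess

    # Rescan all active iSCSI sessions for new/changed LUNs instead of
    # restarting the open-iscsi service (needs root).
    subprocess.check_call(["iscsiadm", "-m", "session", "--rescan"])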

I am attaching my cinder.conf.


Has anyone encountered this problem?  Any help would be really 
appreciated.





-- 
 With Regards  
 Ritesh Nanda 


[attachment "cinder.conf" deleted by Jay S Bryant/Rochester/IBM] 



-- 
 With Regards  
 Ritesh Nanda 






-- 
 With Regards  
 Ritesh Nanda





