<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 12 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
{font-family:SimSun;
panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:SimSun;
panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoAcetate, li.MsoAcetate, div.MsoAcetate
{mso-style-priority:99;
mso-style-link:"\6279\6CE8\6846\6587\672C Char";
margin:0cm;
margin-bottom:.0001pt;
font-size:9.0pt;
font-family:"Calibri","sans-serif";}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
{mso-style-priority:34;
margin:0cm;
margin-bottom:.0001pt;
text-indent:21.0pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
span.EmailStyle17
{mso-style-type:personal;
font-family:"Calibri","sans-serif";
color:windowtext;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri","sans-serif";
color:#1F497D;}
span.Char
{mso-style-name:"\6279\6CE8\6846\6587\672C Char";
mso-style-priority:99;
mso-style-link:\6279\6CE8\6846\6587\672C;
font-family:"Calibri","sans-serif";}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:2132622669;
mso-list-type:hybrid;
mso-list-template-ids:907287254 -1345010758 67698713 67698715 67698703 67698713 67698715 67698703 67698713 67698715;}
@list l0:level1
{mso-level-text:"%1\)";
mso-level-tab-stop:none;
mso-level-number-position:left;
margin-left:18.0pt;
text-indent:-18.0pt;
mso-ansi-font-size:10.5pt;
font-family:"Times New Roman","serif";
mso-ascii-font-family:Calibri;
mso-hansi-font-family:Calibri;
mso-bidi-font-family:"Times New Roman";
color:#1F497D;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="ZH-CN" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">Hi, all,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">David said:</span><span lang="EN-US">
</span><span lang="EN-CA">users will simply try to get rid of ALL their volumes at the same time, and this puts a lot of pressure on the SAN servicing those volumes; since the hardware isn’t replying fast enough, the processes then fall into D state and wait for I/Os to complete, which slows everything down</span><span lang="EN-CA">.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA"></span><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">The system must tolerate this kind of behavior: under SAN pressure, the “dd” processes will fall into D (uninterruptible sleep) state.<o:p></o:p></span></p>
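<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">As an aside, one quick way to confirm this on the host (a standalone sketch, not Cinder code; Linux-only, since it reads /proc directly) is to list the processes stuck in uninterruptible sleep:<o:p></o:p></span></p>

```python
"""List processes in D (uninterruptible sleep) state.

Illustrative sketch: field 3 of /proc/<pid>/stat is the one-letter process
state, and 'D' means the process is blocked waiting for I/O to complete.
"""
import os


def proc_state(pid):
    """Return the one-letter state of a process, or None if unreadable."""
    try:
        with open("/proc/%d/stat" % pid) as f:
            stat = f.read()
    except (IOError, OSError):
        return None
    # The command name (field 2) may itself contain spaces or parentheses,
    # so the state is the first token after the *last* closing paren.
    return stat.rpartition(")")[2].split()[0]


def d_state_pids():
    """PIDs currently stuck in uninterruptible sleep."""
    return [int(p) for p in os.listdir("/proc")
            if p.isdigit() and proc_state(int(p)) == "D"]


if __name__ == "__main__" and os.path.isdir("/proc"):
    for pid in sorted(d_state_pids()):
        print(pid)
```

<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">Watching this list grow while volumes are being deleted is exactly the symptom described above.<o:p></o:p></span></p>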
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">In my opinion, we should rethink the way the data in the volumes is wiped. Filling the device from /dev/zero with the “dd” command is the most primitive method. The standard SCSI command WRITE SAME could be taken into consideration.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">Once the LBA range is provided and the command is sent to the SAN, the storage device can write the repeated data pattern into the LUN or volume by itself; the “dd” work is thereby offloaded to the storage array.<o:p></o:p></span></p>
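<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">As a rough illustration (not Cinder code; the helper name and the 512-byte block size are assumptions, and real arrays limit how many blocks one WRITE SAME may cover), the host-side command could change roughly like this, using sg_write_same from sg3_utils (reference 1 below):<o:p></o:p></span></p>

```python
"""Hypothetical sketch: choose a wipe command for a volume.

Prefers offloading zeroing to the array via sg_write_same, falling back to
the dd-from-/dev/zero behaviour visible in the process capture below.
"""


def wipe_command(device, size_mb, block_size=512, offload=True):
    """Return the argv list that would zero `device`.

    With offload=True, a single WRITE SAME asks the array to repeat one
    zeroed block across the LBA range itself, so the host sends one block
    instead of streaming size_mb megabytes of zeroes over the fabric.
    """
    if offload:
        num_blocks = size_mb * 1024 * 1024 // block_size
        return ["sg_write_same", "--in=/dev/zero",
                "--lba=0", "--num=%d" % num_blocks, device]
    # Current behaviour, as seen in the process capture below.
    return ["dd", "if=/dev/zero", "of=%s" % device,
            "count=%d" % size_mb, "bs=1M", "conv=fdatasync"]


if __name__ == "__main__":
    print(" ".join(wipe_command("/dev/mapper/example-volume", 102400)))
```

<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">With dd, the host pushes every zero byte through its own block layer and the fabric; with WRITE SAME, it sends one zeroed block plus an LBA range, and the array does the filling.<o:p></o:p></span></p>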
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">Thanks,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">Qi<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D">References:<o:p></o:p></span></p>
<p class="MsoListParagraph" style="margin-left:18.0pt;text-indent:-18.0pt;mso-list:l0 level1 lfo1">
<![if !supportLists]><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><span style="mso-list:Ignore">1)<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-US"><a href="http://manpages.ubuntu.com/manpages/karmic/man8/sg_write_same.8.html">http://manpages.ubuntu.com/manpages/karmic/man8/sg_write_same.8.html</a></span><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoListParagraph" style="margin-left:18.0pt;text-indent:-18.0pt;mso-list:l0 level1 lfo1">
<![if !supportLists]><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><span style="mso-list:Ignore">2)<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-US"><a href="http://storagegaga.wordpress.com/2012/01/06/why-vaai/">http://storagegaga.wordpress.com/2012/01/06/why-vaai/</a></span><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<div>
<div class="MsoNormal" align="center" style="text-align:center;text-autospace:none">
<span lang="EN-US" style="font-size:12.0pt;font-family:SimSun;color:#1F497D">
<hr size="2" width="100%" align="center">
</span></div>
<p class="MsoNormal" style="text-autospace:none"><b><span lang="EN-US" style="font-size:12.0pt;font-family:"Arial","sans-serif";color:black">Qi Xiaozhen
<o:p></o:p></span></b></p>
<p class="MsoNormal" style="margin-top:12.0pt;text-autospace:none"><b><span lang="EN-US" style="font-size:10.0pt;font-family:"Arial","sans-serif";color:black">CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
</span></b><span lang="EN-US" style="font-size:10.0pt;font-family:"Arial","sans-serif";color:#5F5F5F"><br>
Mobile: +86 13609283376 Tel: +86 29-89191578 <br>
Email: <a href="mailto:qixiaozhen@huawei.com">qixiaozhen@huawei.com </a><br>
<br>
</span><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-US" style="font-size:10.5pt;color:#1F497D"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> David Hill [mailto:david.hill@ubisoft.com]
<br>
<b>Sent:</b> Saturday, November 02, 2013 6:21 AM<br>
<b>To:</b> openstack@lists.openstack.org<br>
<b>Subject:</b> [Openstack] Wiping of old cinder volumes<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">Hi guys,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA"> </span><span lang="EN-CA">I was wondering whether there was some better way of wiping the content of an old EBS volume before actually deleting the logical volume in cinder? Or perhaps we could configure, or add the possibility to configure, the number of parallel “dd” processes that will be spawned at the same time…<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Sometimes, users will simply try to get rid of ALL their volumes at the same time, and this puts a lot of pressure on the SAN servicing those volumes; since the hardware isn’t replying fast enough, the processes then fall into D state and wait for I/Os to complete, which slows everything down.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Since this process isn’t (in my opinion) as critical as an EBS write or read, perhaps we should be able to throttle the speed of disk wiping, or the number of parallel wipes, to something that wouldn’t affect the other reads and writes, which are most probably more critical.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Here is a small capture of the processes:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">cinder 23782 0.7 0.2 248868 20588 ? S Oct24 94:23 /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">cinder 23790 0.0 0.5 382264 46864 ? S Oct24 9:16 \_ /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32672 0.0 0.0 175364 2648 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32675 0.0 0.1 173636 8672 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d7<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32681 3.2 0.0 106208 1728 ? D 21:48 0:47 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32674 0.0 0.0 175364 2656 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32676 0.0 0.1 173636 8672 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dc<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32683 3.2 0.0 106208 1724 ? D 21:48 0:47 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32693 0.0 0.0 175364 2656 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32694 0.0 0.1 173632 8668 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 32707 3.2 0.0 106208 1728 ? D 21:48 0:46 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 342 0.0 0.0 175364 2648 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 343 0.0 0.1 173636 8672 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 347 3.2 0.0 106208 1728 ? D 21:48 0:45 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 380 0.0 0.0 175364 2656 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 382 0.0 0.1 173632 8668 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 388 3.2 0.0 106208 1724 ? R 21:48 0:45 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 381 0.0 0.0 175364 2648 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d148 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 384 0.0 0.1 173636 8672 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d1<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 391 3.2 0.0 106208 1728 ? D 21:48 0:45 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d148 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 383 0.0 0.0 175364 2648 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e50 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 386 0.0 0.1 173632 8668 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 389 3.1 0.0 106208 1724 ? D 21:48 0:45 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e50 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 385 0.0 0.0 175364 2652 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f886 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 387 0.0 0.1 173632 8668 ? S 21:48 0:00 | | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f8<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 392 3.1 0.0 106208 1728 ? D 21:48 0:45 | | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f886 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 413 0.0 0.0 175364 2652 ? S 21:48 0:00 | \_ sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31f4 count=102400
bs=1M co<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 414 0.0 0.1 173636 8672 ? S 21:48 0:00 | \_ /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">root 420 3.1 0.0 106208 1728 ? D 21:48 0:45 | \_ /bin/dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31f4 count=102400 bs=1M conv=fdatasync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">cinder 23791 0.0 0.5 377464 41968 ? S Oct24 7:46 \_ /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">iostat output:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-23 0.00 0.00 0.00 18408.00 0.00 71.91 8.00 503.06 28.83 0.05 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-25 0.00 0.00 0.00 20544.00 0.00 80.25 8.00 597.24 30.56 0.05 100.10<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-29 0.00 0.00 0.00 19232.00 0.00 75.12 8.00 531.80 27.62 0.05 100.10<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-34 0.00 0.00 0.00 20128.00 0.00 78.62 8.00 498.10 24.92 0.05 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-39 0.00 0.00 0.00 18355.00 0.00 71.70 8.00 534.77 28.98 0.05 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-59 0.00 0.00 0.00 18387.00 0.00 71.82 8.00 587.79 32.10 0.05 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-96 0.00 0.00 0.00 16480.00 0.00 64.38 8.00 467.96 27.51 0.06 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-97 0.00 0.00 0.00 17024.00 0.00 66.50 8.00 502.25 29.21 0.06 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA">dm-98 0.00 0.00 0.00 20704.00 0.00 80.88 8.00 655.67 31.37 0.05 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="FR-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">parent dm:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">dm-0 142.00 74394.00 100.00 2812.00 1.00 302.41 213.38 156.74 52.84 0.34 100.00<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Thank you very much,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA">Dave<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA"><o:p> </o:p></span></p>
</div>
</body>
</html>