<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">One major disadvantage is lack of
      multipath support.<br>
      <br>
Multipath is still done outside of qemu, and there is no native
      multipath support inside of qemu from what I can tell.  Another
      disadvantage is that qemu iSCSI support is all software-based.
      There are hardware iSCSI initiators that are supported by
      os-brick today.  I think migrating attaches into qemu itself
      isn't a good idea, and it will always lag the level of support
      already provided by the tools that have been around forever.
      Also, what kind of support does QEMU have for target portal
      discovery?  Can it discover all targets via a single portal, and
      can you pass in multiple portals to do discovery for the same
      volume?  This is also related to multipath support.  Some storage
      arrays can't do discovery on a single portal; they have to do
      discovery on each interface.<br>
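<div>For reference, qemu's built-in initiator (via libiscsi) addresses
      exactly one portal and one target per disk, expressed as a URL.
      A sketch, reusing the portal and IQN from the example further
      down this thread:</div>
```text
# qemu/libiscsi URL form: iscsi://[user[%password]@]portal[:port]/target-iqn/lun
iscsi://10.75.195.205:3260/iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a/0
```
<div>There is no place in that URL to name a second portal, which is
      part of why multipath has to be handled outside of qemu
      today.</div>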
      <br>
Do you have actual numbers showing that host-based attaches
      passed into libvirt are slower than QEMU direct attaches?<br>
      <br>
You can't really compare RBD to iSCSI.  RBD is a completely
      different beast.  The kernel rbd driver hasn't been as stable or
      as fast as the librbd client that qemu uses.<br>
      <br>
      Walt<br>
      <br>
      <br>
      On 06/15/2016 04:59 PM, Preston L. Bannister wrote:<br>
    </div>
    <blockquote
cite="mid:CA+R0Njb8wztW5cJrMf9BN6y=QbUqh6gMaihQ0PHL2_1x9QHt-Q@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      <div dir="ltr">
        <div>QEMU has the ability to directly connect to iSCSI volumes.
          Running the iSCSI connections through the nova-compute host
          *seems* somewhat inefficient. </div>
        <div><br>
        </div>
        <div>There is a spec/blueprint and implementation that landed in
          Kilo:</div>
        <div><br>
        </div>
        <div><a moz-do-not-send="true"
href="https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html">https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html</a><br>
        </div>
        <div><a moz-do-not-send="true"
href="https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator">https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator</a><br>
        </div>
        <div><br>
        </div>
        <div>From looking at the OpenStack Nova sources ... I am not
          entirely clear on when this behavior is invoked (just for
          Ceph?), and how it might change in future. </div>
        <div><br>
        </div>
        <div>Looking for a general sense where this is headed. (If
          anyone knows...)</div>
        <div><br>
        </div>
        <div>If there is some problem with QEMU and directly attached
          iSCSI volumes, that would explain why this is not the default.
          Or is this simple inertia?</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>I have a concrete concern. I work for a company (EMC) that
          offers backup products, and we now have backup for instances
          in OpenStack. To make this efficient, we need to collect
          changed-block information from instances. </div>
        <div><br>
        </div>
        <div>1)  We could put an intercept in the Linux kernel of the
          nova-compute host to track writes at the block layer. This has
          the merit of working for containers, and potentially
          bare-metal instance deployments. But it is not guaranteed to
          work for instances if the iSCSI volumes are directly attached
          to QEMU.</div>
        <div><br>
        </div>
        <div>2)  We could use the QEMU support for incremental backup
          (the first bit landed in QEMU 2.4). This has the merit of
          working with any storage, but only for virtual machines under
          QEMU.</div>
        <div><br>
        </div>
        <div>As our customers are (so far) only asking about virtual
          machine backup, I long ago settled on (2) as most promising. </div>
        <div><br>
        </div>
        <div>What I cannot clearly determine is where (1) will fail.
          Will all iSCSI volumes connected to QEMU instances eventually
          become directly connected? </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Xiao's unanswered query (below) presents another question.
          Is this a per-site choice? Could I require my customers to
          configure their OpenStack clouds to always route iSCSI
          connections through the nova-compute host? (I am not a fan of
          this approach, but I have to ask.)</div>
        <div><br>
        </div>
        <div>To answer Xiao's question, can a site configure their cloud
          to *always* directly connect iSCSI volumes to QEMU?</div>
        <div><br>
        </div>
        <br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Tue, Feb 16, 2016 at 4:54 AM, Xiao
            Ma (xima2) <span dir="ltr"><<a moz-do-not-send="true"
                href="mailto:xima2@cisco.com" target="_blank">xima2@cisco.com</a>></span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
              <div style="word-wrap:break-word">
                <div>
                  <div
style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
                    <div>
                      <div>Hi, All</div>
                      <div><br>
                      </div>
                      <div>I want to make the qemu communicate with
                        iscsi target using libiscsi directly, and I </div>
                      <div>followed <a moz-do-not-send="true"
                          href="https://review.openstack.org/#/c/135854/"
                          target="_blank">https://review.openstack.org/#/c/135854/</a> to
                        add </div>
                      <div><span
                          style="background-color:rgb(255,255,255)">'</span><span
                          style="background-color:rgb(255,255,255)"><font
                            face="monospace" size="2"><span style="white-space:pre-wrap">volume_drivers
 = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'</span></font> in </span>nova.conf</div>
                      <div> and then restarted the nova services and
                        cinder services, but still the volume
                        configuration of the vm is as below:</div>
                      <div><br>
                      </div>
                      <div>
                        <div>    <disk type='block' device='disk'></div>
                        <div>      <driver name='qemu' type='raw'
                          cache='none'/></div>
                        <div>      <source
dev='/dev/disk/by-path/ip-10.75.195.205:3260-iscsi-iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a-lun-0'/></div>
                        <div>      <target dev='vdb'
                          bus='virtio'/></div>
                        <div>     
                          <serial>076bb429-67fd-4c0c-9ddf-0dc7621a975a</serial></div>
                        <div>      <address type='pci'
                          domain='0x0000' bus='0x00' slot='0x06'
                          function='0x0'/></div>
                        <div>    </disk></div>
                        <div><br>
                        </div>
                        <div><br>
                        </div>
                        <div>I use CentOS 7 and the Liberty release of
                          OpenStack.</div>
                        <div>Could anybody tell me how I can achieve
                          this?</div>
                        <div><br>
                        </div>
                        <div><br>
                        </div>
                      </div>
                      <div>Thanks.</div>
                    </div>
                  </div>
                </div>
              </div>
            </blockquote>
          </div>
        </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: <a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev-request@lists.openstack.org?subject:unsubscribe">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>