<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <p>Hi Andy,</p>
    <p>Yes, it probably does not matter that the variables are visible
      in other containers (you get a working system up and running
      anyway), but it is not the expected behavior when using
      limit_container_types. I did manage to get the variables limited
      to cinder_volume containers by writing the configuration like
      this:<br>
    </p>
    <pre>
XxX:
  ip: 172.22.5.9
  container_vars:
    cinder_storage_availability_zone: cinderAZ_1
    cinder_default_availability_zone: cinderAZ_1
    cinder_backends:
      limit_container_types: cinder_volume
      cinder_nfs:
        volume_backend_name: cinder_nfs
        volume_driver: cinder.volume.drivers.nfs.NfsDriver
        nfs_mount_options: "_netdev,auto,rw,intr,noatime,async,vers=3,proto=tcp,wsize=1048576,rsize=1048576,timeo=1200,actimeo=120"
    cinder_nfs_client:
      limit_container_types: cinder_volume
      nfs_shares_config: /etc/cinder/nfs_shares
      shares:
        - ip: "172.22.20.254"
          share: "/nfs/cinder/production"
</pre>
    <p><br>
    </p>
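    <p>(A quick way to check this - the backend variables should now
      only show up on the cinder_volume container entries in the
      generated inventory, e.g.:)<br>
    </p>
    <pre>
# openstack_inventory.json lives in /etc/openstack_deploy by default
grep -n "cinder_nfs" /etc/openstack_deploy/openstack_inventory.json
</pre>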
    <p>However, I did not find any way to limit the glance nfs
      configuration to only the glance containers. It looks like this in
      openstack_user_config.yml.prod.example:</p>
    <pre>
image_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"
</pre>
    My workaround for the problem was to add the glance configuration
    to user_variables.yml instead; that way the inventory does not get
    filled with glance configuration for every container. This is what
    I added to user_variables.yml:<br>
    <br>
    <pre>
glance_nfs_client:
  - server: "172.22.20.254"
    remote_path: "/nfs/glance/production"
    local_path: "/var/lib/glance/images"
    type: "nfs"
    options: "_netdev,auto,rw,intr,noatime,async,vers=3,proto=tcp,wsize=1048576,rsize=1048576,timeo=1200,actimeo=120"
</pre>
    <br>
    <br>
    So it sounds like you are aware of the problem and are working on
    a fix; that is great.<br>
    Thank you all for the very good work on openstack-ansible.<br>
    <br>
    Regards,<br>
    Andreas<br>
    <br>
    <br>
    <br>
    <div class="moz-cite-prefix">On 02/03/2017 05:27 PM, Andy McCrae
      wrote:<br>
    </div>
    <blockquote
cite="mid:CAM2OCdMe37O3+ZxJCMp7fsT9CUcTAV+NkB_GG5f3d9hA0tC-Zg@mail.gmail.com"
      type="cite">
      <div dir="ltr">Hi Andreas,
        <div><br>
        </div>
        <div>The way you're doing it at the end looks correct - the docs
          are not quite right on that one.</div>
        <div>The nfs_shares file will only get templated on the
          cinder_volumes hosts, as will the nfs_shares_config option -
          so in essence it shouldn't matter that the var isn't limited
          to volume hosts.<br>
        </div>
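        <div>(For reference, the rendered /etc/cinder/nfs_shares file is
          just one "host:/export" line per entry in the shares list, so
          with your values it should come out something like this:)</div>
        <pre>
172.22.20.254:/nfs/cinder/production
</pre>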
        <div><br>
        </div>
        <div>That said, there is a bug here that we've been working to
          fix - this will work if you have a single backend.</div>
        <div>The main limitation is that there is no support for
          multiple backends when one of them is an NFS backend, and no
          support for multiple NFS backends.</div>
        <div>A patch has gone in to allow multiple NFS backends (and
          change the way they're configured), but it's not quite ready
          for release, so we're working on a subsequent fix that will
          then be part of the Newton 14.0.8 release of OSA.</div>
        <div><br>
        </div>
        <div>I'd be happy to keep you updated - if you'd like to drop
          into #openstack-ansible on Freenode IRC there are a lot of
          active operators/developers/deployers, and we'd be happy to
          help further.<br>
        </div>
        <div><br>
        </div>
        <div>But barring that, I'll try to add a reminder to drop an
          update here once that's working.</div>
        <div><br>
        </div>
        <div>Andy</div>
        <div><br>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On 3 February 2017 at 08:56, Andreas
          Vallin <span dir="ltr"><<a moz-do-not-send="true"
              href="mailto:andreas.vallin@it.uu.se" target="_blank">andreas.vallin@it.uu.se</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi!<br>
            <br>
            I need to ask how you correctly configure NFS for use with
            openstack-ansible Newton (14.0.6). I think it is great that
            there is a production example file that uses NFS for glance
            and cinder (openstack_user_config.yml.prod.example), but
            the cinder config is not working for me.<br>
            <br>
            I use this config for storage_hosts, changing only the ip
            and share from the production example:<br>
            <br>
            <pre>
storage_hosts:
  XxXx:
    ip: 172.22.5.9
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        cinder_nfs_client:
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.22.20.254"
              share: "/nfs/cinder/production"
</pre>
            <br>
            And this is the failure when running os-cinder-install.yml<br>
            <br>
            <pre>
TASK [os_cinder : Add in cinder devices types] *********************************
fatal: [XxXx_cinder_volumes_container-080139bd]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'volume_backend_name'\n\nThe error appears to have been in '/etc/ansible/roles/os_cinder/tasks/cinder_backends.yml': line 30, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Add in cinder devices types\n  ^ here\n"}
</pre>
            <br>
            OK, so volume_backend_name is missing. The playbook runs if
            I add volume_backend_name to the config like this:<br>
            <br>
            <br>
            <pre>
storage_hosts:
  XxXx:
    ip: 172.22.5.9
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        cinder_nfs_client:
          volume_backend_name: cinder_nfs
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.22.20.254"
              share: "/nfs/cinder/production"
</pre>
            <br>
            <br>
            But now there is no /etc/cinder/nfs_shares file in the
            cinder_volumes container, so the NFS share will not be
            mounted. This is because the "Create nfs shares export file"
            task in cinder_post_install.yml doesn't see that
            cinder_nfs_client is defined. You also get this in
            cinder.conf:<br>
            <br>
            <pre>
enabled_backends=cinder_nfs_client
# All given backend(s)
[cinder_nfs_client]
volume_backend_name=cinder_nfs
nfs_shares_config=/etc/cinder/nfs_shares
shares=[{u'ip': u'172.22.20.254', u'share': u'/nfs/cinder/production'}]
</pre>
            <br>
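            (For comparison, this is roughly what I would expect a
            working NFS backend section in cinder.conf to look like,
            with the shares list feeding only the nfs_shares file
            instead of being dumped into the config:)<br>
            <pre>
enabled_backends=cinder_nfs
[cinder_nfs]
volume_backend_name=cinder_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares
nfs_mount_options=_netdev,auto,rw,intr,noatime,async,vers=3,proto=tcp,wsize=1048576,rsize=1048576,timeo=1200,actimeo=120
</pre>
            <br>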
            <br>
            This configuration works for me:<br>
            <br>
            <pre>
storage_hosts:
  XxXx:
    ip: 172.22.5.9
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
      cinder_default_availability_zone: cinderAZ_1
      limit_container_types: cinder_volume
      cinder_backends:
        cinder_nfs:
          volume_backend_name: cinder_nfs
          volume_driver: cinder.volume.drivers.nfs.NfsDriver
          nfs_mount_options: "_netdev,auto,rw,intr,noatime,async,vers=3,proto=tcp,wsize=1048576,rsize=1048576,timeo=1200,actimeo=120"
      cinder_nfs_client:
        nfs_shares_config: /etc/cinder/nfs_shares
        shares:
          - ip: "172.22.20.254"
            share: "/nfs/cinder/production"
</pre>
            <br>
            <br>
            BUT when I look in the inventory file
            (openstack_inventory.json), it doesn't look like this
            configuration is limited to the cinder_volume containers,
            even though "limit_container_types: cinder_volume" is used.
            So now I feel it is time to ask what a correct
            configuration should look like.<br>
            <br>
            Regards,<br>
            Andreas<br>
            <br>
            <br>
            _______________________________________________<br>
            OpenStack-operators mailing list<br>
            <a moz-do-not-send="true"
              href="mailto:OpenStack-operators@lists.openstack.org"
              target="_blank">OpenStack-operators@lists.openstack.org</a><br>
            <a moz-do-not-send="true"
              href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators"
              rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </body>
</html>