[OpenStack-DefCore] [interop-challenge] Workload Results

Cazares, Luz luz.cazares at intel.com
Mon Oct 3 22:37:07 UTC 2016


1.) Your name:  Luz Cazares
2.) Your email: luz.cazares at intel.com
3.) Reporting on behalf of Company/Organization:   Intel and Rackspace (OSIC).
4.) Name and version (if applicable) of the product you tested: OSIC Cluster – Cloud1
5.) Version of OpenStack the product uses: Liberty
6.) Link to RefStack results for this product:
      2016.01 Guideline:
                https://refstack.openstack.org/#/results/c66d2ded-7b26-4e0e-8efa-dbbcd5a1526b (In compliance)
                https://refstack.openstack.org/#/results/26ed4939-2da9-462f-b9b7-a47a663749ff
      2016.08 Guideline:
                https://refstack.openstack.org/#/results/a25bb6b0-82f7-4102-9eb0-dcb86b876cf8 (In compliance)
                https://refstack.openstack.org/#/results/cd0c2415-8284-4398-bafb-c6b352beaca0
                https://refstack.openstack.org/#/results/3f8da208-713b-443c-bf86-e0033b96c352



     * Note:
         For guideline 2016.08, two Tempest test cases were modified so that they work correctly with https requests:
                    tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_pagination
                    tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_pagination
         Reopened Tempest bug: https://bugs.launchpad.net/tempest/+bug/1532116
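         As an aside, the post does not include the actual test modifications. Purely as a hedged illustration of the kind of scheme handling involved (assuming, and this is only an assumption, that the change amounts to making follow-up requests use the https scheme when the endpoint is served over TLS), a small Python helper could rewrite a URL before it is requested:

#!/usr/bin/env python3
# Illustrative sketch only. This is NOT the Tempest change referenced
# above; it merely shows one way to normalize a request URL to https
# when the API endpoint is served over TLS.
from urllib.parse import urlsplit, urlunsplit


def force_https(url):
    """Return the URL with an http scheme rewritten to https."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)


if __name__ == "__main__":
    # Hypothetical volume API URL; not taken from the deployment above.
    print(force_https("http://volume.example.com/v2/volumes/detail?limit=1"))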

7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened.
        A couple of times the workload failed because an instance was unreachable over SSH:
               PLAY [setup web servers] *******************************************************
               TASK [setup] *******************************************************************
               [172.99.106.126]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
       Root cause: a common problem. The cloud either did not associate the floating IP, or was slow to do so after reporting that the floating IP had been associated.
       Workaround: run the workload a second time without destroying the VMs; the script is designed to run repeatedly without ill effects. (A sketch of a pre-flight SSH reachability check follows after item D below.)
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
         No. The behavior was seen only a couple of times.
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
         NA
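
  As mentioned in the workaround above, here is a minimal sketch of a pre-flight check that would avoid the transient failure: poll the instance's floating IP on the SSH port until it accepts connections before launching the playbook. The helper name, timeouts, and the reuse of the IP from the error output are illustrative assumptions; the lampstack workload does not ship such a script.

#!/usr/bin/env python3
# Illustrative sketch: wait until a freshly associated floating IP
# accepts TCP connections on the SSH port before running Ansible.
# The timeouts and the hard-coded address are assumptions, not part
# of the lampstack workload.
import socket
import time


def wait_for_ssh(host, port=22, timeout=300, interval=5):
    """Return True once a TCP connection to host:port succeeds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False


if __name__ == "__main__":
    # Floating IP taken from the error output above, used here only
    # as an example value.
    if wait_for_ssh("172.99.106.126"):
        print("SSH reachable, safe to start the playbook")
    else:
        raise SystemExit("floating IP never became reachable over SSH")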

8.) Workload 2: Docker Swarm with Terraform and/ or Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/dockerswarm)
  A.) Did the workload run successfully?
           Yes, with Ansible; no issues were found.

  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened.
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

Regards