1.) Your name: Xiangfei Zhu and Mark Voelker
2.) Your email: xiangfeiz@vmware.com and mvoelker@vmware.com
3.) Name and version (if applicable) of the product you tested: VMware Integrated OpenStack 2.5 (http://www.openstack.org/marketplace/distros/distribution/vmware/vmware-integrated-openstack)
4.) Version of OpenStack the product uses: Kilo
5.) Link to RefStack results for this product: https://refstack.openstack.org/#/results/fc80592b-4503-481c-8aa6-49d414961f2d
6.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
A.) Did the workload run successfully? No
B.) If not, did you encounter any end-user visible error messages? Please copy/paste them here and provide any context you think would help us understand what happened.
The workload code expects the attached volume's block device name to be /dev/vdb, but it is /dev/sdb in our environment.
C.) Were you able to determine why the workload failed on this product? If so, please describe. Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
This appears to be a bad assumption in the workload code: it assumes the volume will appear as /dev/vdb on the guest OS. The block device name is not actually guaranteed (even if the user specifies one in the nova volume-attach call); it is a function of both the hypervisor (e.g. how it presents the block device to the guest) and the guest OS (e.g. how it names devices, what other devices are present, etc.). The upstream documentation suggests that it is better to attach the device without assuming a particular name and then discover the device by rescanning the bus in the guest: http://docs.openstack.org/developer/nova/block_device_mapping.html
Therefore it would probably be a best practice to at least parameterize this rather than assuming /dev/vdb. Ideally the workload code would discover the correct block device name itself, since even parameterization could fail in some cases.
D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product? Can you describe what would need to be done?
Make the device name configurable or discover it instead of looking for a particular block device name. The former is quite trivial to implement: add a parameter and replace all instances of /dev/vdb with the parameter name in main.yml.
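To make the "discover rather than assume" suggestion above a bit more concrete, here is a rough, hypothetical Python sketch (not code from the lampstack workload, and it assumes Python is available on the guest) that snapshots the disks visible to the guest before the volume attach and then waits for a new one to appear. The path it prints could then be used in place of the hard-coded /dev/vdb; the same snapshot-and-diff idea could also be expressed as an Ansible task in main.yml.

#!/usr/bin/env python3
# Hypothetical standalone helper (not part of the lampstack playbook):
# discover the block device that appears after a Cinder volume is attached
# instead of assuming it will be /dev/vdb. Note that on some hypervisors a
# SCSI bus rescan in the guest may also be needed before the new disk is
# visible, as the nova block_device_mapping documentation describes.
import os
import time

SYS_BLOCK = "/sys/class/block"

def disks():
    """Return the set of whole disks currently visible (partitions excluded)."""
    return {d for d in os.listdir(SYS_BLOCK)
            if not d.startswith(("loop", "ram"))
            and not os.path.exists(os.path.join(SYS_BLOCK, d, "partition"))}

def wait_for_new_disk(before, timeout=60, interval=2):
    """Poll until a disk not present in `before` shows up, or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        new = disks() - before
        if new:
            return "/dev/" + sorted(new)[0]
        time.sleep(interval)
    raise RuntimeError("no new block device appeared within %d seconds" % timeout)

if __name__ == "__main__":
    snapshot = disks()
    input("Snapshot taken; attach the volume now, then press Enter... ")
    print(wait_for_new_disk(snapshot))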
7.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
A.) Did the workload run successfully? No
B.) If not, did you encounter any end-user visible error messages? Please copy/paste them here and provide any context you think would help us understand what happened.
The docker.service and swarm services failed to start.
C.) Were you able to determine why the workload failed on this product? If so, please describe. Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
The nova instance created in VIO has the network device name "ens32" rather than "eth0" as the workload code assumes.
This is similar to the problem we found with the LAMP workload and block devices: the workload code probably should not assume device names if it aims to be portable. Here, the NIC device name is a function of the hypervisor and of how the guest OS names devices. NIC naming is particularly contentious just now because predictable network interface names [1] were only fairly recently introduced in mainline distributions, so many guest images probably still use "classical" NIC names (e.g. "eth0") instead. It's worth noting that even if you assume a particular guest image (e.g. a particular version of CoreOS), the NIC name still depends partly on the hypervisor. In VIO 2.5, ESXi presents the NIC to the guest as a hotpluggable PCI device (hence the NIC name starting with "ens"). Other hypervisors might present it differently and thus cause the same OS/udev/systemd to choose a different NIC name.
[1] https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product? Can you describe what would need to be done?
It would be trivial to get this workload running. Per the above, at a minimum the workload should not assume the NIC will be eth0: making the NIC name a parameter the user can set would be sufficient to get it running for us, and a small change to terraform/dockerswarm-coreos/templates/10-docker-service.conf accomplishes that. Ideally, the code would discover the correct NIC rather than making the user set it.
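As a rough illustration of the "discover the correct NIC" idea (a hypothetical sketch, not code from the dockerswarm-coreos workload; it assumes Python is available on the guest image, which a minimal CoreOS image may not provide), the following reads /proc/net/route on the guest and returns whichever interface owns the default IPv4 route, whether that happens to be eth0 or ens32. In practice the discovery could equally be done in the systemd unit template itself; the sketch only shows the logic.

#!/usr/bin/env python3
# Hypothetical helper (not part of the dockerswarm-coreos templates):
# find the NIC that owns the default IPv4 route instead of hard-coding
# "eth0". On a VIO 2.5 guest this would typically print "ens32"; on a
# guest using classical naming it would print "eth0".

def default_interface():
    """Return the name of the interface carrying the default IPv4 route."""
    with open("/proc/net/route") as routes:
        next(routes)  # skip the column header line
        for line in routes:
            fields = line.split()
            # A destination of 00000000 marks the default route.
            if len(fields) >= 2 and fields[1] == "00000000":
                return fields[0]
    raise RuntimeError("no default IPv4 route found")

if __name__ == "__main__":
    print(default_interface())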