[Openstack] Issue with assignment of Intel’s QAT Card to VM (PCI-passthrough) using openstack-mitaka release on CentOS 7.2 host
Alex
fr487x at gmail.com
Wed Jun 8 04:48:55 UTC 2016
Hi Chinmaya,
Are you sure that IOMMU is enabled? This is how it looks when it’s on:
alex at homelab:~$
alex at homelab:~$ dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 00000000bedb3008 0000B0 (v01 INTEL DQ67SW 00000001 INTL 00000001)
[ 0.000000] Intel-IOMMU: enabled
[ 0.037894] dmar: IOMMU 0: reg_base_addr fed91000 ver 1:0 cap c9008020660262 ecap f0105a
[ 0.037984] IOAPIC id 0 under DRHD base 0xfed91000 IOMMU 0
[ 1.063541] DMAR: No ATSR found
[ 1.063570] IOMMU 0 0xfed91000: using Queued invalidation
[ 1.063573] IOMMU: Setting RMRR:
[ 1.063587] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbf0ac000 - 0xbf0bafff]
[ 1.063623] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbf0ac000 - 0xbf0bafff]
[ 1.063639] IOMMU: Prepare 0-16MiB unity mapping for LPC
[ 1.063649] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 12.943351] vboxpci: IOMMU found
alex at homelab:~$
alex at homelab:~$ ls /sys/kernel/iommu_groups/
0 1 2 3 4 5 6 7 8 9
alex at homelab:~$
As far as I can tell, PCI passthrough doesn’t work without the IOMMU enabled (intel_iommu=on on the kernel command line).
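
If it is off, the usual way to enable it on CentOS 7 is roughly the following (a rough sketch; adjust the grub.cfg path if you boot via EFI, e.g. /boot/efi/EFI/centos/grub.cfg):

# 1. append intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub
# 2. regenerate the grub config and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
# 3. verify after the reboot:
grep intel_iommu /proc/cmdline
ls /sys/kernel/iommu_groups/
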
> On Jun 7, 2016, at 7:13 PM, Moshe Levi <moshele at mellanox.com> wrote:
>
>
>
> From: Chinmaya Dwibedy [mailto:ckdwibedy at gmail.com]
> Sent: Tuesday, June 07, 2016 9:22 AM
> To: Moshe Levi <moshele at mellanox.com>
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] Issue with assignment of Intel’s QAT Card to VM (PCI-passthrough) using openstack-mitaka release on Cent OS 7.2 host
>
>
> Hi Moshe,
>
> Thank you for your response. I did not find any errors in the nova-compute log. Here is the output.
>
> 2016-06-07 02:10:45.181 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Auditing locally available compute resources for node localhost
> 2016-06-07 02:10:45.618 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Total usable vcpus: 36, total allocated vcpus: 8
> 2016-06-07 02:10:45.618 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Final resource view: name=localhost phys_ram=128657MB used_ram=16896MB phys_disk=49GB used_disk=160GB total_vcpus=36 used_vcpus=8 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='0435',tags={dev_type='type-PF'},vendor_id='8086')]
> 2016-06-07 02:10:45.692 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Compute_service record updated for localhost:localhost
> 2016-06-07 02:11:45.182 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Auditing locally available compute resources for node localhost
> 2016-06-07 02:11:45.631 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Total usable vcpus: 36, total allocated vcpus: 8
> 2016-06-07 02:11:45.632 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Final resource view: name=localhost phys_ram=128657MB used_ram=16896MB phys_disk=49GB used_disk=160GB total_vcpus=36 used_vcpus=8 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='0435',tags={dev_type='type-PF'},vendor_id='8086')]
> 2016-06-07 02:11:45.704 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Compute_service record updated for localhost:localhost
> 2016-06-07 02:12:47.181 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Auditing locally available compute resources for node localhost
> 2016-06-07 02:12:47.632 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Total usable vcpus: 36, total allocated vcpus: 8
> 2016-06-07 02:12:47.632 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Final resource view: name=localhost phys_ram=128657MB used_ram=16896MB phys_disk=49GB used_disk=160GB total_vcpus=36 used_vcpus=8 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='0435',tags={dev_type='type-PF'},vendor_id='8086')]
> 2016-06-07 02:12:47.704 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Compute_service record updated for localhost:localhost
> 2016-06-07 02:13:48.180 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Auditing locally available compute resources for node localhost
> 2016-06-07 02:13:48.632 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Total usable vcpus: 36, total allocated vcpus: 8
> 2016-06-07 02:13:48.632 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Final resource view: name=localhost phys_ram=128657MB used_ram=16896MB phys_disk=49GB used_disk=160GB total_vcpus=36 used_vcpus=8 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='0435',tags={dev_type='type-PF'},vendor_id='8086')]
> 2016-06-07 02:13:48.701 118672 INFO nova.compute.resource_tracker [req-5f0325a6-2dd8-4a25-b518-9143cc0aac0c - - - - -] Compute_service record updated for localhost:localhost
>
>
> The nova-scheduler log output below says that the PciPassthroughFilter has been passed.
> The output below says that Filter PciPassthroughFilter returned 0 hosts, which means it did not pass.
>
>
> 2016-06-07 02:12:26.121 130792 INFO nova.service [-] Starting scheduler node (version 13.0.0-1.el7)
> 2016-06-07 02:13:12.449 130792 INFO nova.scheduler.host_manager [req-3500a771-cfb2-4b0a-95c4-bf05542a210a - - - - -] Successfully synced instances from host 'localhost'.
> 2016-06-07 02:14:09.455 130792 WARNING nova.scheduler.host_manager [req-9ab12b26-5e41-4365-b68c-76d177437434 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (25 GB > -111 GB)
> 2016-06-07 02:14:09.457 130792 INFO nova.filters [req-9ab12b26-5e41-4365-b68c-76d177437434 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts
> 2016-06-07 02:14:09.457 130792 INFO nova.filters [req-9ab12b26-5e41-4365-b68c-76d177437434 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '7f108c1b-7df5-4336-925e-60e202f38dfc'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'CoreFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)']
> 2016-06-07 02:15:17.430 130792 INFO nova.scheduler.host_manager [req-2423445e-f1f1-4d75-9ce8-6d874ea9b27e - - - - -] Successfully synced instances from host 'localhost'.
> ~
>
>
>
> Regards,
> Chinmaya
>
> On Tue, Jun 7, 2016 at 11:27 AM, Moshe Levi <moshele at mellanox.com> wrote:
> Can you look if you have errors in the nova-computes log?
>
> Also check if you passed the PciPassthroughFilter in the nova-scheduler log
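>
> For example (a quick check, assuming the default RDO log location):
>
> grep PciPassthroughFilter /var/log/nova/nova-scheduler.log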
>
>
> From: Chinmaya Dwibedy [mailto:ckdwibedy at gmail.com]
> Sent: Tuesday, June 07, 2016 8:29 AM
> To: openstack at lists.openstack.org
> Subject: [Openstack] Issue with assignment of Intel’s QAT Card to VM (PCI-passthrough) using openstack-mitaka release on Cent OS 7.2 host
>
>
> Hi All,
>
> I want to use Intel’s QAT card as a PCI passthrough device. However, when I launch a VM using a flavor configured for passthrough, the errors below appear in nova-conductor.log and the instance goes into the Error state. Note that I have installed the
> openstack-mitaka release on the host (CentOS 7.2). Can anyone please look at the information below and let me know if I have missed anything or done something wrong? Thank you in advance for your support and time.
>
>
> When I create an instance, the following error appears in nova-conductor.log:
>
> 2016-06-06 05:42:42.005 4898 WARNING nova.scheduler.utils [req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
> Traceback (most recent call last):
>
> File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
> return func(*args, **kwargs)
>
> File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
> dests = self.driver.select_destinations(ctxt, spec_obj)
>
> File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
> raise exception.NoValidHost(reason=reason)
>
> NoValidHost: No valid host was found. There are not enough hosts available.
>
> 2016-06-06 05:42:42.006 4898 WARNING nova.scheduler.utils [req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] [instance: f1db1cce-0777-4f0e-a141-4b278c2d98b4] Setting instance to ERROR state
>
> In order to assign Intel’s QAT card to VMs, I followed the procedure below.
>
> Using the PCI bus ID, I found the product ID:
>
>
> 1) [root at localhost ~(keystone_admin)]# lspci -nn | grep QAT
> 83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
> 88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
>
>
> [root at localhost ~(keystone_admin)]# cat /sys/bus/pci/devices/0000:83:00.0/device
> 0x0435
> [root at localhost ~(keystone_admin)]# cat /sys/bus/pci/devices/0000:88:00.0/device
> 0x0435
> [root at localhost ~(keystone_admin)]#
>
>
> [root at localhost ~(keystone_admin)]#
>
> 2) Configured the following in nova.conf:
>
>
> pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id": "8086", "device_type": "type-PCI"}
> pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0435"}]
>
> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter
> scheduler_available_filters=nova.scheduler.filters.all_filter
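>
> (For reference, the whitelist can also match on the PCI address rather than vendor/product IDs; a sketch using the bus addresses from step 1:
>
> pci_passthrough_whitelist = [{"address":"0000:83:00.0"}, {"address":"0000:88:00.0"}]
> )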
>
> 3) service openstack-nova-api restart
> 4) systemctl restart openstack-nova-compute
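>
> (Since scheduler_default_filters from step 2 is read by nova-scheduler, that service presumably needs a restart as well; on an RDO/CentOS install, something like:
>
> systemctl restart openstack-nova-scheduler
> )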
>
> 5) [root at localhost ~(keystone_admin)]# nova flavor-list
> +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
> +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
> | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
> | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
> | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
> +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> [root at localhost ~(keystone_admin)]#
>
> 6) nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1"
>
> 7) [root at localhost ~(keystone_admin)]# nova flavor-show 4
> +----------------------------+--------------------------------------------+
> | Property                   | Value                                      |
> +----------------------------+--------------------------------------------+
> | OS-FLV-DISABLED:disabled   | False                                      |
> | OS-FLV-EXT-DATA:ephemeral  | 0                                          |
> | disk                       | 80                                         |
> | extra_specs                | {"pci_passthrough:alias": "QuickAssist:1"} |
> | id                         | 4                                          |
> | name                       | m1.large                                   |
> | os-flavor-access:is_public | True                                       |
> | ram                        | 8192                                       |
> | rxtx_factor                | 1.0                                        |
> | swap                       |                                            |
> | vcpus                      | 4                                          |
> +----------------------------+--------------------------------------------+
> [root at localhost ~(keystone_admin)]#
>
>
>
> 8) [root at localhost ~(keystone_admin)]# nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be --user-data=./myfile.txt TEST
> WARNING: Option "--key_name" is deprecated; use "--key-name"; this option will be removed in novaclient 3.3.0.
> +--------------------------------------+--------------------------------------------------------------------------+
> | Property | Value |
> +--------------------------------------+--------------------------------------------------------------------------+
> | OS-DCF:diskConfig | MANUAL |
> | OS-EXT-AZ:availability_zone | |
> | OS-EXT-SRV-ATTR:host | - |
> | OS-EXT-SRV-ATTR:hypervisor_hostname | - |
> | OS-EXT-SRV-ATTR:instance_name | instance-00000026 |
> | OS-EXT-STS:power_state | 0 |
> | OS-EXT-STS:task_state | scheduling |
> | OS-EXT-STS:vm_state | building |
> | OS-SRV-USG:launched_at | - |
> | OS-SRV-USG:terminated_at | - |
> | accessIPv4 | |
> | accessIPv6 | |
> | adminPass | 7ZKdcaQut7gu |
> | config_drive | |
> | created | 2016-06-06T09:42:41Z |
> | flavor | m1.large (4) |
> | hostId | |
> | id | f1db1cce-0777-4f0e-a141-4b278c2d98b4 |
> | image | Benu-vMEG-Dev-M.0.0.0-160525-1347 (bc859dc5-103b-428b-814f-d36e59009454) |
> | key_name | oskey1 |
> | metadata | {} |
> | name | TEST |
> | os-extended-volumes:volumes_attached | [] |
> | progress | 0 |
> | security_groups | default |
> | status | BUILD |
> | tenant_id | 4bc608763cee41d9a8df26d3ef919825 |
> | updated | 2016-06-06T09:42:41Z |
> | user_id | 266f5859848e4f39b9725203dda5c3f2 |
> +--------------------------------------+--------------------------------------------------------------------------+
> [root at localhost ~(keystone_admin)]#
>
>
>
> 9) MariaDB [nova]> select * from pci_devices;
> +---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+---------------+------------+-----------+-------------+
> | created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | numa_node | parent_addr |
> +---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+---------------+------------+-----------+-------------+
> | 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 1 | 1 | 0000:83:00.0 | 0435 | 8086 | type-PF | pci_0000_83_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
> | 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 2 | 1 | 0000:88:00.0 | 0435 | 8086 | type-PF | pci_0000_88_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
> +---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+---------------+------------+-----------+-------------+
> 2 rows in set (0.00 sec)
>
> MariaDB [nova]>
>
> [root at localhost ~(keystone_admin)]# dmesg | grep -e DMAR -e IOMMU
> [ 0.000000] ACPI: DMAR 000000007b69a000 00130 (v01 INTEL S2600WT 00000001 INTL 20091013)
> [ 0.128779] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020de
> [ 0.128785] dmar: IOMMU 1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020de
> [ 0.128911] IOAPIC id 10 under DRHD base 0xfbffc000 IOMMU 0
> [ 0.128912] IOAPIC id 8 under DRHD base 0xc7ffc000 IOMMU 1
> [ 0.128913] IOAPIC id 9 under DRHD base 0xc7ffc000 IOMMU 1
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# lscpu | grep Virtualization
> Virtualization: VT-x
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# nova service-list
> +----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
> | Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
> +----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
> | 9  | nova-cert        | localhost | internal | enabled | up    | 2016-06-07T04:58:28.000000 | -               |
> | 10 | nova-consoleauth | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
> | 11 | nova-scheduler   | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
> | 12 | nova-conductor   | localhost | internal | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
> | 18 | nova-compute     | localhost | nova     | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
> +----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
> [root at localhost ~(keystone_admin)]#
>
>
> [root at localhost ~(keystone_admin)]# nova host-list
> +-----------+-------------+----------+
> | host_name | service     | zone     |
> +-----------+-------------+----------+
> | localhost | cert        | internal |
> | localhost | consoleauth | internal |
> | localhost | scheduler   | internal |
> | localhost | conductor   | internal |
> | localhost | compute     | nova     |
> +-----------+-------------+----------+
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# neutron agent-list
> +--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+
> | id | agent_type | host | availability_zone | alive | admin_state_up | binary |
> +--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+
> | 0e81d20f-b41d-490a-966a-7171880963b9 | Metadata agent | localhost | | :-) | True | neutron-metadata-agent |
> | 2ccb17dc-35d8-41cc-8e5d-83496a7e26b0 | Metering agent | localhost | | :-) | True | neutron-metering-agent |
> | 6fef2fa7-2479-4d45-889c-b38b854ac3e3 | DHCP agent | localhost | nova | :-) | True | neutron-dhcp-agent |
> | 87c976cc-e3cd-4818-aa4f-ee599bf812b1 | L3 agent | localhost | nova | :-) | True | neutron-l3-agent |
> | aeb4f399-2281-4ad3-b880-802812910ec8 | Open vSwitch agent | localhost | | :-) | True | neutron-openvswitch-agent |
> +--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# nova hypervisor-list
> +----+---------------------+-------+---------+
> | ID | Hypervisor hostname | State | Status  |
> +----+---------------------+-------+---------+
> | 1  | localhost           | up    | enabled |
> +----+---------------------+-------+---------+
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# grep ^virt_type /etc/nova/nova.conf
> virt_type=kvm
> [root at localhost ~(keystone_admin)]#
>
> [root at localhost ~(keystone_admin)]# grep ^compute_driver /etc/nova/nova.conf
> compute_driver=libvirt.LibvirtDriver
> [root at localhost ~(keystone_admin)]#
> [root at localhost ~(keystone_admin)]#
>
> Regards,
> Chinmaya
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
--
Alexey
fr487x at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160608/10326109/attachment.html>