Hello Rodolfo,
Thanks for your support.
Regarding your suggestion to capture the traffic on the physical function and to verify that the physical network connected to the SR-IOV cards allows VLAN traffic in the 1000:1009 range:
The network I have configured is of VLAN type, within that range:
# openstack network show 4cacd198-29c3-47b3-bf77-39f1d01c9a22 -c provider:network_type -c provider:physical_network -c provider:segmentation_id
+---------------------------+-----------+
| Field                     | Value     |
+---------------------------+-----------+
| provider:network_type     | vlan      |
| provider:physical_network | sriovnet1 |
| provider:segmentation_id  | 1002      |
+---------------------------+-----------+
Note: I forgot to add that intra-host communication between VMs deployed on the same host works fine, but inter-host communication does not.
One more thing: IP addresses are not being assigned to the VM interfaces; we have to configure static IP addresses manually.
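An aside that may help with the manual addressing (the server and port names below are placeholders, not from your setup): Neutron still allocates a fixed IP to the SR-IOV port even when the DHCP reply never reaches the guest, so the static address configured inside the VM should match Neutron's allocation, which can be looked up with:

```shell
# List the ports attached to an instance, including their Neutron-allocated IPs.
# "my-sriov-vm" is a placeholder; use the actual instance name or UUID.
openstack port list --server my-sriov-vm

# Inspect a single port (placeholder name/UUID) for its fixed IPs and MAC.
openstack port show my-sriov-port -c fixed_ips -c mac_address
```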
Please find below the incoming traffic captured on the PF:
# tcpdump -i enp8s0f1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp8s0f1, link-type EN10MB (Ethernet), capture size 262144 bytes
12:39:28.584000 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:88:f5:3e (oui Unknown), length 285
12:39:39.183491 IP6 fe80::a236:9fff:fe24:803a > ip6-allrouters: ICMP6, router solicitation, length 16
12:39:49.660269 IP6 fe80::a236:9fff:fe53:e598.5240 > ff02::15a.5240: UDP, length 221
12:39:49.660897 IP6 fe80::a236:9fff:fe53:e59a.5240 > ff02::15a.5240: UDP, length 221
12:39:54.105381 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from a0:36:9f:d8:0d:a8 (oui Unknown), length 293
12:39:54.897051 LLDP, length 46
12:39:56.281671 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 3c:fd:fe:cd:06:71 (oui Unknown), length 293
12:40:06.286310 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from a0:36:9f:d8:0d:aa (oui Unknown), length 293
12:40:15.039416 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 3c:fd:fe:cd:06:70 (oui Unknown), length 293
12:40:19.675902 IP6 fe80::a236:9fff:fe53:e598.5240 > ff02::15a.5240: UDP, length 221
12:40:19.676390 IP6 fe80::a236:9fff:fe53:e59a.5240 > ff02::15a.5240: UDP, length 221
12:40:25.708972 LLDP, length 46
12:40:29.288242 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:c4:3f:4f (oui Unknown), length 285
12:40:33.008653 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:88:f5:3e (oui Unknown), length 285
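A quick way to confirm the one-way pattern in a capture like the one above is to count the two DHCP directions: requests appear as "bootpc > ...bootps" and replies would appear as "bootps > ...bootpc". A minimal sketch over a saved excerpt (two sample request lines shown):

```shell
# Save an excerpt of the PF capture; replies, if any, would show the
# reverse direction "bootps > ...bootpc".
cat > /tmp/pf_capture.log <<'EOF'
12:39:28.584000 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:88:f5:3e (oui Unknown), length 285
12:39:54.105381 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from a0:36:9f:d8:0d:a8 (oui Unknown), length 293
EOF

# Count each direction; "|| true" keeps the pipeline alive when grep -c
# prints 0 and exits non-zero on no matches.
requests=$(grep -c 'bootpc > .*bootps' /tmp/pf_capture.log)
replies=$(grep -c 'bootps > .*bootpc' /tmp/pf_capture.log || true)
echo "requests=$requests replies=$replies"
# -> requests=2 replies=0
```

A zero reply count on the PF means the DHCP responses are being lost somewhere between the controller and the compute's physical network, not inside the VM.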
No, the VMs are not receiving any DHCP reply; as the capture above shows, only the requests appear on the PF. Yes, the DHCP agent is on the controller.
Kindly let me know if anything else is required.
Thanks!
Regards,
Harshit Mishra
From: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
Sent: Wednesday, June 22, 2022 6:46 PM
To: Harshit Mishra <harshit@voereir.com>
Subject: Re: [kolla-ansible][xena] Facing network issue when providing SR-IOV support

Hello Harshit:
If the VMs are spawned without any issue, that could be a problem in the underlying network deployment. When sending traffic from a VM with an SR-IOV port, try to capture the traffic in the physical function. Please check the physical network connected to the SR-IOV network cards allow VLAN traffic in 1000:1009 (according to your configuration).
Another question: do the VMs with SR-IOV port receive the DHCP reply? If so, at least you know the compute to controller communication is working (assuming the DHCP agent is on the controller).
Regards.
On Wed, Jun 22, 2022 at 2:17 PM Harshit Mishra <harshit@voereir.com> wrote:
Hi!
I have deployed OpenStack Xena using kolla-ansible (v13.0.1) on a multi-node setup (1 controller+network node, multiple computes).
On this cluster I would like to have support for normal, as well as direct (SR-IOV) vNIC types.
I have completed all the prerequisites, such as configuring VFs on the SR-IOV network interface.
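For reference, this is the typical way VFs are created and verified on the compute node (a sketch; the VF count of 8 is an example, and the interface name is taken from the configuration below):

```shell
# Create 8 VFs on the PF (the X520/82599 supports up to 63 per port).
echo 8 > /sys/class/net/enp8s0f1/device/sriov_numvfs

# Verify the VFs are visible: each "vf N" line shows its MAC and VLAN state.
ip link show enp8s0f1

# The VFs also appear as separate PCI functions (8086:10ed is the X520 VF).
lspci -nn | grep -i "virtual function"
```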
In the current deployment, I have created two physnets, one for flat network (called physnet1 on br-ex), and one for SR-IOV (called sriovnet1 on Intel 10G 2P X520 card). I am creating one network of VXLAN type for normal vNIC, and one of VLAN type on sriovnet1 (called sriov_network) for direct vNIC.
Mapping of PF with provider in sriov-agent.ini:
# cat /etc/kolla/neutron-sriov-agent/sriov_agent.ini
[sriov_nic]
physical_device_mappings = sriovnet1:enp8s0f1
exclude_devices =
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
----------------------------------------------
ml2_conf.ini on controller node:
# cat /etc/kolla/neutron-server/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan,vlan,flat
mechanism_drivers = openvswitch,l2population,sriovnicswitch
extension_drivers = port_security
[ml2_type_vlan]
network_vlan_ranges = physnet1,sriovnet1:1000:1009
[ml2_type_flat]
flat_networks = physnet1,sriovnet1
[ml2_sriov]
agent_required = False
supported_pci_vendor_devs = 8086:10ed
[ml2_type_vxlan]
vni_ranges = 1:1000
----------------------------------------------
# grep -nr enabled_filters /etc/kolla/nova-scheduler/
/etc/kolla/nova-scheduler/nova.conf:13:enabled_filters = ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,AggregateInstanceExtraSpecsFilter,PciPassthroughFilter
----------------------------------------------
# grep passthrough /etc/kolla/nova-compute/nova.conf
passthrough_whitelist = [{"physical_network": "sriovnet1", "devname": "enp8s0f1"}]
VM interfaces using normal vNICs are able to communicate across compute nodes. The same is not working for direct vNICs belonging to the sriov_network. I have tried multiple configuration changes, but none seem to work.
Please help in solving this issue, or suggest a different way of achieving this if my approach is wrong or not optimal.
Thanks!
Regards,
Harshit Mishra