- In my test, I found that after the first three commands ("openstack network create ...", "openstack subnet create ...", "openstack port create ..."), the network topology exists on the DPU side, and rules exist in the OVN north DB and south DB on the controller, like this:
```
root@c1:~# ovn-nbctl show
switch 9bdacdd4-ca2a-4e35-82ca-0b5fbd3a5976 (neutron-066c8dc2-c98b-4fb8-a541-8b367e8f6e69) (aka selfservice)
    port 01a68701-0e6a-4c30-bfba-904d1b9813e1
        addresses: ["unknown"]
    port 18a44c6f-af50-4830-ba86-54865abb60a1 (aka pf0vf1)
        addresses: ["fa:16:3e:13:36:e2 172.1.1.228"]

gyw@c1:~$ sudo ovn-sbctl list Port_Binding
_uuid        : 61dc8bc0-ab33-4d67-ac13-0781f89c905a
chassis      : []
datapath     : 91d3509c-d794-496a-ba11-3706ebf143c8
encap        : []
external_ids : {name=pf0vf1, "neutron:cidrs"="172.1.1.241/24", "neutron:device_id"="", "neutron:device_owner"="", "neutron:network_name"=neutron-066c8dc2-c98b-4fb8-a541-8b367e8f6e69, "neutron:port_name"=pf0vf1, "neutron:project_id"="512866f9994f4ad8916d8539a7cdeec9", "neutron:revision_number"="1", "neutron:security_group_ids"="de8883e8-ccac-4be2-9bb2-95e732b0c114"}

root@c1c2dpu:~# sudo ovs-vsctl show
62cf78e5-2c02-471e-927e-1d69c2c22195
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port ovn--1
            Interface ovn--1
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.168.2.98"}
        Port pf0vf1
            Interface pf0vf1
    ovs_version: "2.17.2-24a81c8"
```
That is why I guess the "first three commands" have already created the network topology, and the "openstack server create" command only needs to plug the VF into the VM on the HOST side and does NOT need to call Neutron, since the networking is already done.
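For reference, the "first three commands" I ran were of roughly this shape (a sketch; the network/subnet/port names and the subnet range are taken or inferred from the output above, and the exact flags may differ in your deployment):

```shell
# create the tenant network and subnet (names/ranges inferred from the DB output above)
openstack network create selfservice
openstack subnet create --network selfservice --subnet-range 172.1.1.0/24 selfservice-subnet

# --vnic-type remote-managed is what marks the port as DPU/remote_managed
openstack port create --network selfservice --vnic-type remote-managed pf0vf1
```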
- In my test, when I then ran the "openstack server create" command, I got the ERROR
"No valid host...", which is the error mentioned earlier in this thread.
The reason was already given: nova-scheduler's PCI filter module reports no valid host. It reports no valid host because nova-scheduler cannot see the PCI information of the compute node, and it cannot see that information because the compute node's /etc/nova/nova.conf configures the remote_managed tag like this:
```
[pci]
passthrough_whitelist = {"vendor_id": "15b3", "product_id": "101e", "physical_network": null, "remote_managed": "true"}
alias = { "vendor_id":"15b3", "product_id":"101e", "device_type":"type-VF", "name":"a1" }
```
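For the scheduler side, my understanding (please correct me if this is wrong) is that PciPassthroughFilter must be enabled in the controller's nova.conf for PCI-aware scheduling to work at all, and the [pci] alias must also be present there; a sketch (the filter list here is illustrative, append PciPassthroughFilter to whatever filters you already enable):

```ini
# controller /etc/nova/nova.conf (sketch; adapt to your deployment)
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter,ComputeFilter,PciPassthroughFilter

[pci]
alias = { "vendor_id":"15b3", "product_id":"101e", "device_type":"type-VF", "name":"a1" }
```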
2) Discussing some design details of the "remote_managed" tag; I don't know whether this is right with respect to the design of OpenStack with a DPU:
- On the neutron-server side, the remote_managed tag is used in the "openstack port create ..." command.
This command makes neutron-server / OVN / ovn-controller / OVS build the network topology, as shown above.
I think this part is right, because the test shows it.
- On the nova side, two things need to happen: first, the PCI passthrough filter in nova-scheduler; second, nova-compute plugging the VF into the VM.
If the link above is right, the remote_managed tag exists in /etc/nova/nova.conf on the controller node and in /etc/nova/nova.conf on the compute node.
As described in my test above, the ERROR occurs at this step.
So what should the "PCI passthrough filter" do, and how should it be configured?
Then, if the "PCI passthrough filter" stage passes, what will nova-compute do on the compute node?
- build the OpenStack physical env, e.g. plug the DPU into the compute node, use a VM as the controller, etc.
- build OpenStack nova, neutron, OVN, ovn-vif, and OVS following that link.
- configure the DPU-side /etc/neutron/neutron.conf
- configure the host-side /etc/nova/nova.conf
- configure the host-side /etc/nova/nova-compute.conf
- run the first 3 commands
- last, run this command and get the ERROR
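For completeness, the failing last command was of roughly this shape (a sketch; the flavor and image names here are placeholders, and pf0vf1 is the port created earlier):

```shell
# boot a VM attached to the pre-created remote-managed port;
# flavor/image names are hypothetical
openstack server create --flavor m1.small --image ubuntu-22.04 \
    --port pf0vf1 test-vm
```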