<div dir="ltr"><div><div>I know that vif_type is binding_failed on a multinode setup, and I also know why it happens. <br><br></div>As for interface-attach, I got it to work for sriov ports and even verified it works inside the instance. The trick was to specify a profile with pci_slot and pci_vendor_info during port create, in case anyone else wants to do this. <br><br>Thank you,<br></div>Ageeleshwar K<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 5, 2015 at 12:19 PM, Irena Berezovsky <span dir="ltr"><<a href="mailto:irenab.dev@gmail.com" target="_blank">irenab.dev@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Akilesh,<div>Please see my responses inline.</div><div>Hope this helps,</div><div><br></div><div>BR,</div><div>Irena</div><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Thu, Feb 5, 2015 at 6:14 AM, Akilesh K <span dir="ltr"><<a href="mailto:akilesh1597@gmail.com" target="_blank">akilesh1597@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hi Irena,<br><br></div><div>Issue 1 - I agree. You are correct.<br></div><div><br></div><div>Issue 2<br></div>The behavior you outlined:<br>1. <span style="color:rgb(116,27,71)">When a port is created with vnic_type=direct, the vif_type is 'unbound'.
The pci_vendor_info will be available during port update when the 'nova
boot' command is invoked and a PCI device is allocated. </span><br></div>This happens when the controller and compute are on the same host, not when they are on different hosts. On a multiserver setup, vif_type is set to binding_failed during port create.<br><br></div></div></div></blockquote></span><div>This is strange, since the port-create operation is a pure neutron API call and it should not differ whether you are in a multiserver or an all-in-one setup.</div><span class=""><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div></div>Second, I am not doing nova boot. Instead I am doing nova interface-attach. In this case the pci_vendor_info is not updated by anyone but me, and pci_slot is also not populated.<br><br></div></div></blockquote></span><div>interface-attach is currently not supported for SR-IOV ports. There is a proposed blueprint to support this: <a href="https://review.openstack.org/#/c/139910/" target="_blank">https://review.openstack.org/#/c/139910/</a>.</div><div>So for now, the only option to provide a PCI passthrough vNIC is what is described in the previously referenced wiki page: create a neutron port with vnic_type=direct and then 'nova boot' with the pre-created port.</div><div><div class="h5"><div><br></div><div>Do you still think this is correct?</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi Akilesh,<div>please see inline<br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Wed, Feb 4, 2015 at 11:32 AM, Akilesh K <span dir="ltr"><<a href="mailto:akilesh1597@gmail.com" target="_blank">akilesh1597@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi,<br>Issue 1:<br></div>I do not understand what you mean. I did specify the physical_network. What I am trying to say is that some physical networks exist only on the compute node and not on the network node. We are unable to create a network on those physnets. The workaround was to fake their existence on the network node too, which I believe is the wrong way to do it.<br><br></div></div></div></div></div></div></div></blockquote></span><div>Every physical network should be defined at the Controller node, including the range of segmentation ids (i.e. vlan ids) available for allocation.</div><div>When a virtual network is created, you should verify that it has an associated network type and segmentation id (assuming you are using the provider network extension).</div><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><br></div>Issue 2:<br></div>I looked directly into the code after looking at the logs. <br><br>1. What neutron (the sriov mech driver) does is load the default list of 'supported_pci_vendor_devs', then pick up profile->pci_vendor_info from the port definition we sent in the port create request and check whether it is supported. 
If not, it says 'binding_failed'.<br></div></div></div></div></div></blockquote></span><div>When a port is created with vnic_type=direct, the vif_type is 'unbound'. The pci_vendor_info will be available during port update when the 'nova boot' command is invoked and a PCI device is allocated. </div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><br></div>I am fine with this.<br><br></div>2. Then, when I attach the created port to a host, nova's vif driver (hw_veb) looks for profile->pci_slot in the context of the port that was supplied and fails to attach the port to the instance if it is not present. <br><br></div></div></div></blockquote></span><div>The nova vif driver receives profile->pci_slot from neutron, but it was actually filled in earlier by nova during port-update. </div><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div></div>This is what I think should be done by neutron itself: neutron's sriov mech driver should have updated the port with the pci_slot details when the port got created, and this does happen on a single-machine install. We need to find out why it does not happen on a multi-node install, possibly because the mech driver is not running on the host with the sriov devices, and fix it. 
<br><br></div></div></blockquote></span><div>I suggest following the <a href="https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking" style="font-size:12.8000001907349px" target="_blank">https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking</a> instructions; this should work for you.</div><div><div><div><br></div><div>I hope you guys can understand what I mean.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><br></div>Thank you,<br>Ageeleshwar K<br><div><div><div><div><br></div></div></div></div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 4, 2015 at 2:49 PM, Itzik Brown <span dir="ltr"><<a href="mailto:itzikb@redhat.com" target="_blank">itzikb@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Hi,<br>
<br>
Issue 1:<br>
You must specify the physical networks.<br>
Please look at:<br>
<a href="https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking" target="_blank">https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking</a><br>
<br>
Issue 2:<br>
AFAIK the agent is supported by only one vendor.<br>
Can you please look for errors in Neutron's log?<br>
<br>
Thanks,<br>
Itzik<div><div><br>
<div>On 02/04/2015 09:12 AM, Akilesh K
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div>
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>Hi,<br>
</div>
I found two issues with the way neutron behaves on
a multiserver install. I got it to work, but I do
not think this is the right way to do it. It might
be a bug we might want to fix, and for which I
could volunteer. <br>
<br>
</div>
<div>Setup - Multiserver Juno on Ubuntu.<br>
<br>
</div>
<div>Machine 1 - Controller<br>
</div>
<div>All API servers, l3, dhcp and ovs agents<br>
<br>
</div>
<div>Machine 2 - Compute <br>
</div>
<div>nova compute, neutron-ovs-agent, neutron sriov
agent.<br>
</div>
<div><br>
</div>
<div><br>
</div>
Issue 1:<br>
<br>
</div>
Controller node has physnets 'External', 'Internal'
configured in ml2<br>
<br>
</div>
Compute node has physnets 'Internal', 'Physnet1',
'Physnet2' configured in ml2<br>
<br>
</div>
When I do neutron net-create --provider:physical_network
Physnet1, it complains that 'Physnet1' is not available. <br>
<br>
</div>
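For reference, the full provider-network create call would look roughly like the following sketch (the network name 'sriov-net' and VLAN ID 100 are made-up placeholders for illustration, not values from this thread):

```
# Hypothetical example: 'sriov-net' and VLAN 100 are placeholders.
neutron net-create sriov-net \
    --provider:network_type vlan \
    --provider:physical_network Physnet1 \
    --provider:segmentation_id 100
```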
Of course it's not available on the controller, but it is
available on the compute node, and there is no way to tell
neutron to host that network on the compute node alone.<br>
<br>
</div>
Workaround:<br>
I had to include 'Physnet1' on the controller node also to get
it to work, except that there is no bridge mapping for this
physnet.<br>
<br>
<br>
</div>
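Concretely, the workaround above amounts to listing the physnet on the controller without any bridge mapping for it. A minimal sketch of the relevant config, assuming VLAN tenant networking (the VLAN ranges and bridge names here are illustrative, not from this thread):

```
# Controller node, ml2_conf.ini (VLAN ranges are illustrative)
[ml2_type_vlan]
network_vlan_ranges = Internal:100:199,Physnet1:200:299

# Controller node, OVS agent config: Physnet1 is deliberately
# absent from bridge_mappings, since no such bridge exists there.
[ovs]
bridge_mappings = External:br-ex,Internal:br-int
```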
Issue 2:<br>
<div>
<div>
<div><br>
</div>
<div>This is related to the sriov agent. This agent is
configured only on the compute node, as that node alone has
supported devices. <br>
<br>
</div>
<div>When I do a port create with --binding:vnic_type direct
--binding:host_id <compute node>, the port is created
but with binding:vif_type <b>'binding_failed'</b>, and
naturally I could not attach it to any instance.<br>
<br>
</div>
<div>I looked at the code and figured out that the neutron API
expects binding:profile also in the format <br>
{"pci_slot": "0000:03:10.1", "pci_vendor_info":
"8086:10ed"}<br>
<br>
</div>
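As a sketch, here is what the port-create request body looks like when the binding profile is supplied by hand, using the example values quoted above (the network UUID and host name are placeholders, and whether neutron accepts the port depends on your sriov agent configuration):

```python
import json

# Build the body for POST /v2.0/ports, supplying the binding
# profile manually (the workaround described in this thread).
def make_sriov_port_body(network_id, host_id, pci_slot, pci_vendor_info):
    return {
        "port": {
            "network_id": network_id,
            "binding:vnic_type": "direct",
            "binding:host_id": host_id,
            "binding:profile": {
                "pci_slot": pci_slot,
                "pci_vendor_info": pci_vendor_info,
            },
        }
    }

body = make_sriov_port_body(
    "NETWORK-UUID-PLACEHOLDER",   # placeholder network UUID
    "compute1",                   # placeholder compute host name
    "0000:03:10.1",               # VF PCI address from the thread
    "8086:10ed",                  # vendor:product pair from the thread
)
print(json.dumps(body["port"]["binding:profile"], sort_keys=True))
```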
<div>Is this how it should be? On a single-machine
install I did not have to do this. However, on a
multiserver install I even had to give the PCI address in the
exact format to get it to work. <br>
<br>
</div>
<div>I have a strong feeling that this could be a lot simpler
if neutron could take care of finding the details in a
smart way rather than relying on the administrator to find
which device is available and configure it. <br>
<br>
<br>
</div>
<div>Note:<br>
</div>
<div>1. If I can get some expert advice, I can fix both of
these. <br>
</div>
<div>2. I am not sure if this question should rather be sent
to the openstack-dev group. Let me know.<br>
</div>
<div><br>
<br>
</div>
<div>Thank you,<br>
</div>
<div>Ageeleshwar K<br>
<br>
<br>
</div>
</div>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</div></div><pre>_______________________________________________
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
</pre>
</blockquote>
<br>
</div>
</blockquote></div><br></div>
</div></div><br>
<br></blockquote></div></div></div><br></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div>