<div dir="ltr"><div><div><div><div><div><div><div>Hi,<br>Issue 1:<br></div>I do not understand what you mean. I did specify the physical_network. What I am trying to say is that some physical networks exist only on the compute node and not on the network node. We are unable to create a network on those physnets. The workaround was to fake their existence on the network node too, which I believe is the wrong way to do it.<br><br><br></div>Issue 2:<br></div>I looked directly into the code after looking at the logs.<br><br>1. What neutron (the sriov mech driver) does is load the default list of 'supported_pci_vendor_devs', then pick up profile->pci_vendor_info from the port definition we sent in the port create request and check whether it is supported. If not, it reports 'binding_failed'.<br><br></div>I am fine with this.<br><br></div>2. Then, when I attach the created port to an instance, nova's vif driver (hw_veb) looks for profile->pci_slot in the context of the port that was supplied and fails to attach it to the instance if pci_slot is not present.<br><br></div>This is what I think should be done by neutron itself: neutron's sriov mech driver should have updated the port with the pci_slot details when the port was created, and this does happen on a single-machine install. We need to find out why it does not happen on a multi-node install, possibly because the mech driver is not running on the host with the sriov devices, and fix it.<br><br></div><div>I hope you guys can understand what I mean.<br></div><div><br></div>Thank you,<br>Ageeleshwar K<br><div><div><div><div><br></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 4, 2015 at 2:49 PM, Itzik Brown <span dir="ltr"><<a href="mailto:itzikb@redhat.com" target="_blank">itzikb@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Hi,<br>
<br>
Issue 1:<br>
You must specify the physical networks.<br>
Please look at:<br>
<a href="https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking" target="_blank">https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking</a><br>
<br>
Issue 2:<br>
AFAIK the agent is supported by only one vendor.<br>
Can you please look for errors in Neutron's log?<br>
<br>
Thanks,<br>
Itzik<div><div class="h5"><br>
<div>On 02/04/2015 09:12 AM, Akilesh K
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div class="h5">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>Hi,<br>
</div>
I found two issues with the way neutron behaves on a multi-server install. I got it to work, but I do not think this is the right way to do it. It might be a bug we might want to fix, and for which I could volunteer.<br>
<br>
</div>
<div>Setup - Multi-server Juno on Ubuntu.<br>
<br>
</div>
<div>Machine 1 - Controller<br>
</div>
<div>All API servers, l3, dhcp and ovs agents<br>
<br>
</div>
<div>Machine 2 - Compute <br>
</div>
<div>nova compute, neutron-ovs-agent, neutron sriov
agent.<br>
</div>
<div><br>
</div>
<div><br>
</div>
Issue 1:<br>
<br>
</div>
Controller node has physnets 'External', 'Internal'
configured in ml2<br>
<br>
</div>
Compute node has physnets 'Internal', 'Physnet1',
'Physnet2' configured in ml2<br>
<br>
</div>
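For reference, a mismatch like the one described would look roughly like this in the ml2 configuration on each node (a sketch; the VLAN ranges and file path are assumptions, not taken from my actual setup):

```ini
; Controller node: /etc/neutron/plugins/ml2/ml2_conf.ini (assumed path)
[ml2_type_vlan]
network_vlan_ranges = External,Internal

; Compute node: /etc/neutron/plugins/ml2/ml2_conf.ini (assumed path)
[ml2_type_vlan]
network_vlan_ranges = Internal,Physnet1:1000:1099,Physnet2:1100:1199
```

With this layout the neutron server only knows about 'External' and 'Internal', so a net-create referencing 'Physnet1' is rejected even though the compute node could serve it.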
When I do neutron net-create --provider:physical_network Physnet1, it complains that 'Physnet1' is not available.<br>
<br>
</div>
Of course it is not available on the controller, but it is available on the compute node, and there is no way to tell neutron to host that network on the compute node alone.<br>
<br>
</div>
Workaround:<br>
I had to include 'Physnet1' on the controller node too to get it to work, except that there is no bridge mapping for this physnet.<br>
<br>
<br>
</div>
Issue 2:<br>
<div>
<div>
<div><br>
</div>
<div>This is related to the sriov agent. This agent is configured only on the compute node, as that node alone has supported devices.<br>
<br>
</div>
<div>When I do a port create with --binding:vnic_type direct --binding:host_id <compute node>, the port is created but with binding:vif_type: <b>'binding-failed'</b>, and naturally I could not attach it to any instance.<br>
<br>
</div>
<div>I looked at the code and figured out that the neutron API also expects binding:profile, in the format<br>
{"pci_slot": "0000:03:10.1", "pci_vendor_info": "8086:10ed"}<br>
<br>
</div>
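To make the required format concrete, here is a minimal sketch in Python of the port-create request body that eventually worked for me. The PCI address and vendor:product pair are the ones quoted above; the network id and host name are placeholders:

```python
import json

# binding:profile in the shape the Juno SR-IOV mech driver checks:
# pci_slot is the PCI address of the virtual function,
# pci_vendor_info is "<vendor_id>:<product_id>".
profile = {
    "pci_slot": "0000:03:10.1",      # VF PCI address (from the example above)
    "pci_vendor_info": "8086:10ed",  # vendor:product pair (from the example above)
}

port_body = {
    "port": {
        "network_id": "NET_ID",         # placeholder: uuid of the network
        "binding:vnic_type": "direct",
        "binding:host_id": "compute1",  # hypothetical compute host name
        "binding:profile": profile,
    }
}

# This JSON is the body POSTed to /v2.0/ports.
print(json.dumps(port_body, indent=2))
```

On a single-machine install the mech driver fills pci_slot in by itself, which is exactly the behaviour I would expect on a multi-node install too.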
<div>Is this how it should be? On a single-machine install I did not have to do this. However, on a multi-server install I had to give even the PCI address in the exact format to get it to work.<br>
<br>
</div>
<div>I have a serious feeling that this could be a lot simpler if neutron took care of finding the details in a smart way rather than relying on the administrator to find which device is available and configure it.<br>
<br>
<br>
</div>
<div>Note:<br>
</div>
<div>1. If I can get some expert advice I can fix both of these.<br>
</div>
<div>2. I am not sure whether this question should rather be sent to the openstack-dev list. Let me know.<br>
</div>
<div><br>
<br>
</div>
<div>Thank you,<br>
</div>
<div>Ageeleshwar K<br>
<br>
<br>
</div>
</div>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</div></div><pre>_______________________________________________
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
</pre>
</blockquote>
<br>
</div>
</blockquote></div><br></div>