<div dir="ltr">Sounds good!. Thanks for the clarification.<div><br></div><div>Best regards,</div><div>Hongbin</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 13, 2016 at 1:43 PM, Antoni Segura Puimedon <span dir="ltr"><<a href="mailto:celebdor@gmail.com" target="_blank">celebdor@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu <<a href="mailto:hongbin034@gmail.com">hongbin034@gmail.com</a>> wrote:<br>
><br>
><br>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary<br>
> <<a href="mailto:choudharyvikas16@gmail.com">choudharyvikas16@gmail.com</a>> wrote:<br>
>><br>
>><br>
>><br>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu <<a href="mailto:hongbin034@gmail.com">hongbin034@gmail.com</a>> wrote:<br>
>>><br>
>>> Ivan,<br>
>>><br>
>>> Thanks for the proposal. From Magnum's point of view, this proposal<br>
>>> doesn't seem to require storing neutron/rabbitmq credentials in tenant VMs,<br>
>>> which is more desirable. I am looking forward to the PoC.<br>
>><br>
>><br>
>> Hongbin, can you please elaborate on why this will not require storing neutron<br>
>> credentials?<br>
>> For example, in the libnetwork case, neutron's commands like "show_port" and<br>
>> "update_port" will still need to be invoked from inside the VM.<br>
><br>
><br>
> In a typical COE cluster, there are master nodes and work (minion/slave)<br>
> nodes. Regarding credentials, the following is optimal:<br>
> * Avoid storing credentials on work nodes. If credentials have to be stored,<br>
> move them to master nodes if we can (containers run on work nodes, so<br>
> credentials stored there carry a higher risk). A question for you: do neutron's<br>
> commands like "show_port" and "update_port" need to be invoked from work<br>
> nodes or from master nodes?<br>
> * If credentials have to be stored, scope them with least privilege (Magnum<br>
> uses a Keystone trust for this purpose).<br>
<br>
</span>I think that with the ipvlan proposal you can probably do without having to call<br>
those two. IIUC the proposal, the binding on the VM, taking libnetwork as an<br>
example, would be:<br>
<br>
1. docker sends a request to kuryr-libnetwork running in container-in-vm mode.<br>
2. kuryr-libnetwork forwards the request to a kuryr daemon that has the necessary<br>
credentials to talk to neutron (it could run either on the master node or on the<br>
compute node, just like the dhcp agent, i.e., with one foot on the VM network<br>
and one on the underlay).<br>
3. The kuryr daemon makes the allowed-address-pair requests to Neutron and<br>
returns the result to kuryr-libnetwork in the VM, at which point the VM port can<br>
already send and receive data for the container.<br>
4. kuryr-libnetwork in the VM creates an ipvlan virtual device and assigns it the<br>
IP returned by the kuryr daemon.<br>
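<br>
To make step 4 a bit more concrete, here is a minimal sketch of what that last<br>
binding step could look like with pyroute2; the interface names, netns name,<br>
prefix length and address are placeholders for illustration, not actual<br>
kuryr-libnetwork code:<br>
<br>
from pyroute2 import IPRoute, NetNS<br>
<br>
# All names and values below are hypothetical placeholders.<br>
MASTER_IFACE = 'eth0'            # the VM port's interface<br>
SLAVE_IFACE = 'ipvl0'            # ipvlan device handed to the container<br>
CONTAINER_NETNS = 'container-1'  # netns created for the container<br>
CONTAINER_IP = '10.0.0.42'       # address returned by the kuryr daemon<br>
<br>
ipr = IPRoute()<br>
master_idx = ipr.link_lookup(ifname=MASTER_IFACE)[0]<br>
<br>
# Create an ipvlan slave in L2 mode on top of the VM's interface.<br>
ipr.link('add', ifname=SLAVE_IFACE, kind='ipvlan',<br>
         link=master_idx, ipvlan_mode=0)  # 0 == IPVLAN_MODE_L2<br>
<br>
# Move the slave into the container's network namespace.<br>
slave_idx = ipr.link_lookup(ifname=SLAVE_IFACE)[0]<br>
ipr.link('set', index=slave_idx, net_ns_fd=CONTAINER_NETNS)<br>
<br>
# Inside the netns, assign the IP the daemon got from Neutron and bring it up.<br>
ns = NetNS(CONTAINER_NETNS)<br>
idx = ns.link_lookup(ifname=SLAVE_IFACE)[0]<br>
ns.addr('add', index=idx, address=CONTAINER_IP, prefixlen=24)<br>
ns.link('set', index=idx, state='up')<br>
ns.close()<br>
ipr.close()<br>
<br>
Everything that needs Neutron credentials (steps 2 and 3) stays with the kuryr<br>
daemon, so nothing sensitive has to live in the VM.<br>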
<div class="HOEnZb"><div class="h5"><br>
><br>
>><br>
>><br>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.<br>
>><br>
>> -VikasC<br>
>>><br>
>>><br>
>>> Best regards,<br>
>>> Hongbin<br>
>>><br>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <<a href="mailto:ivan.coughlan@intel.com">ivan.coughlan@intel.com</a>><br>
>>> wrote:<br>
>>>><br>
>>>><br>
>>>><br>
>>>> Overview<br>
>>>><br>
>>>> Kuryr proposes to address the issues of double encapsulation and<br>
>>>> exposure of containers as neutron entities when containers are running<br>
>>>> within VMs.<br>
>>>><br>
>>>> As an alternative to vlan-aware-vms and the use of OVS within the VM, we<br>
>>>> propose to:<br>
>>>><br>
>>>> - Use allowed-address-pairs configuration for the VM neutron<br>
>>>> port<br>
>>>><br>
>>>> - Use IPVLAN for wiring the containers within the VM<br>
>>>><br>
>>>><br>
>>>><br>
>>>> In this way, we:<br>
>>>><br>
>>>> - Achieve an efficient data path to the container within the VM<br>
>>>><br>
>>>> - Better leverage OpenStack EPA (Enhanced Platform Awareness)<br>
>>>> features to accelerate the data path (more details below)<br>
>>>><br>
>>>> - Mitigate the risk of vlan-aware-vms not making it into neutron in<br>
>>>> time<br>
>>>><br>
>>>> - Provide a solution that works on existing and previous<br>
>>>> OpenStack releases<br>
>>>><br>
>>>><br>
>>>><br>
>>>> This work should be done in a way permitting the user to optionally<br>
>>>> select this feature.<br>
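>>>><br>
>>>> For example, the toggle could be an oslo.config option; the option and<br>
>>>> group names below are hypothetical, not an existing kuryr setting:<br>
>>>><br>
>>>> from oslo_config import cfg<br>
>>>><br>
>>>> # Hypothetical option names, for illustration only.<br>
>>>> ipvlan_opts = [<br>
>>>>     cfg.BoolOpt('ipvlan_in_vm', default=False,<br>
>>>>                 help='Bind containers with IPVLAN instead of creating ports.'),<br>
>>>>     cfg.StrOpt('ipvlan_master_interface', default='eth0',<br>
>>>>                help='VM interface to use as the IPVLAN master.'),<br>
>>>> ]<br>
>>>> cfg.CONF.register_opts(ipvlan_opts, group='binding')<br>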
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> Required Changes<br>
>>>><br>
>>>> The four main changes we have identified in the current kuryr codebase<br>
>>>> are as follows:<br>
>>>><br>
>>>> · Introduce an option to enable the “IPVLAN in VM” use case. This<br>
>>>> can be achieved by using a config file option or possibly passing a command<br>
>>>> line argument. The IPVLAN master interface must also be identified.<br>
>>>><br>
>>>> · If using the “IPVLAN in VM” use case, Kuryr should no longer<br>
>>>> create a new port in Neutron or the associated veth pairs. Instead, Kuryr<br>
>>>> will create a new IPVLAN slave interface on top of the VM’s master interface<br>
>>>> and pass this slave interface to the container netns.<br>
>>>><br>
>>>> · If using the “IPVLAN in VM” use case, the VM’s port ID needs to be<br>
>>>> identified so we can associate the additional IPVLAN addresses with the<br>
>>>> port. This can be achieved by querying Neutron’s show-port function and<br>
>>>> passing the VM’s IP address.<br>
>>>><br>
>>>> · If using the “IPVLAN in VM” use case, Kuryr should associate the<br>
>>>> additional IPVLAN addresses with the VM’s port. This can be achieved using<br>
>>>> Neutron’s allowed-address-pairs flag in the port-update function. We intend<br>
>>>> to make use of Kuryr’s existing IPAM functionality to request these IPs from<br>
>>>> Neutron. (A rough sketch of these last two steps follows below.)<br>
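>>>><br>
>>>> For illustration, a minimal sketch of the last two items with<br>
>>>> python-neutronclient; the credentials, addresses and filter values below<br>
>>>> are placeholders, not working Kuryr code:<br>
>>>><br>
>>>> from keystoneauth1 import identity, session<br>
>>>> from neutronclient.v2_0 import client as neutron_client<br>
>>>><br>
>>>> # Hypothetical values; credentials would live wherever the deployment<br>
>>>> # keeps them (e.g. a kuryr daemon outside the tenant VM).<br>
>>>> VM_IP = '192.168.0.10'        # the VM's own address on its Neutron port<br>
>>>> CONTAINER_IP = '192.168.0.42' # address requested for the container via IPAM<br>
>>>><br>
>>>> auth = identity.Password(auth_url='http://controller:5000/v3',<br>
>>>>                          username='kuryr', password='secret',<br>
>>>>                          project_name='service',<br>
>>>>                          user_domain_id='default', project_domain_id='default')<br>
>>>> neutron = neutron_client.Client(session=session.Session(auth=auth))<br>
>>>><br>
>>>> # Find the VM's port by its fixed IP (the "show-port by IP" step);<br>
>>>> # assume exactly one match for the sketch.<br>
>>>> ports = neutron.list_ports(fixed_ips='ip_address=' + VM_IP)['ports']<br>
>>>> vm_port = ports[0]<br>
>>>><br>
>>>> # Add the container address to the port's allowed-address-pairs.<br>
>>>> pairs = vm_port.get('allowed_address_pairs', [])<br>
>>>> pairs.append({'ip_address': CONTAINER_IP})<br>
>>>> neutron.update_port(vm_port['id'],<br>
>>>>                     {'port': {'allowed_address_pairs': pairs}})<br>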
>>>><br>
>>>><br>
>>>><br>
>>>> Asks<br>
>>>><br>
>>>> We wish to discuss the pros and cons.<br>
>>>><br>
>>>> For example, the exposure of containers as proper neutron entities and the<br>
>>>> utility of neutron’s allowed-address-pairs are not yet well understood.<br>
>>>><br>
>>>><br>
>>>><br>
>>>> We also wish to understand whether this approach is acceptable for kuryr.<br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> EPA<br>
>>>><br>
>>>> The Enhanced Platform Awareness initiative is a continuous program to<br>
>>>> enable fine-tuning of the platform for virtualized network functions.<br>
>>>><br>
>>>> This is done by exposing the processor and platform capabilities through<br>
>>>> the management and orchestration layers.<br>
>>>><br>
>>>> When a virtual network function is instantiated by an Enhanced Platform<br>
>>>> Awareness enabled orchestrator, the application requirements can be more<br>
>>>> efficiently matched with the platform capabilities.<br>
>>>><br>
>>>><br>
>>>> <a href="http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/" rel="noreferrer" target="_blank">http://itpeernetwork.intel.<wbr>com/openstack-kilo-release-is-<wbr>shaping-up-to-be-a-milestone-<wbr>for-enhanced-platform-<wbr>awareness/</a><br>
>>>><br>
>>>> <a href="https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf" rel="noreferrer" target="_blank">https://networkbuilders.intel.<wbr>com/docs/OpenStack_EPA.pdf</a><br>
>>>><br>
>>>><br>
>>>> <a href="https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo" rel="noreferrer" target="_blank">https://www.brighttalk.com/<wbr>webcast/12229/181563/epa-<wbr>features-in-openstack-kilo</a><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> Regards,<br>
>>>><br>
>>>> Ivan….<br>
>>>><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>><br>
>><br>
><br>
><br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div>