[openstack-dev] [neutron][qos] Egress minimum bandwidth assurance

Alonso Hernandez, Rodolfo rodolfo.alonso.hernandez at intel.com
Fri Jul 8 11:38:02 UTC 2016


Hello:

I’m working on the egress minimum bandwidth assurance QoS policy rule.

I made a POC using the following architecture:

-          Every time a new rule is created and assigned to a port:

o   A new queue is created in OVS.

o   The QoS policy (OVS database) is created/updated on all other ports in br-int (on the same VLAN), and on br-tun and br-phy, adding/updating the queue.

o   An OpenFlow rule sets the queue id on traffic entering OVS from the port.

-          Every time a rule is updated:

o   The values are updated in the OVS database.

-          Every time a rule is unassigned from a port:

o   The queue is removed from the QoS policies (OVS database).

o   The OVS rule that sets the queue id is removed.
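The lifecycle above can be sketched as the OVS commands each step would issue. This is only an illustration: the function names, the queue number and the port names are hypothetical, not the actual agent code.

```python
def create_queue_cmd(port, queue_num, min_rate_bps):
    """Rule created/assigned: add a QoS row on `port` with a min-rate queue."""
    return ("ovs-vsctl -- set Port {p} qos=@qos "
            "-- --id=@qos create QoS type=linux-htb queues:{n}=@q "
            "-- --id=@q create Queue other_config:min-rate={r}"
            .format(p=port, n=queue_num, r=min_rate_bps))

def update_queue_cmd(queue_uuid, min_rate_bps):
    """Rule updated: rewrite the min-rate value on the existing Queue row."""
    return ("ovs-vsctl set Queue {q} other_config:min-rate={r}"
            .format(q=queue_uuid, r=min_rate_bps))

def enqueue_flow_cmd(bridge, in_port, queue_num):
    """Rule assigned: steer traffic entering OVS from the port into the queue."""
    return ("ovs-ofctl add-flow {b} "
            "'table=0,in_port={i},actions=set_queue:{n},normal'"
            .format(b=bridge, i=in_port, n=queue_num))

def unassign_cmds(port, queue_uuid, bridge, in_port):
    """Rule unassigned: drop the QoS reference, the queue and the enqueue flow."""
    return [
        "ovs-vsctl clear Port {p} qos".format(p=port),
        "ovs-vsctl destroy Queue {q}".format(q=queue_uuid),
        "ovs-ofctl del-flows {b} 'in_port={i}'".format(b=bridge, i=in_port),
    ]
```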

The problem is that, given the Neutron extensions API, I can’t make any updates on other ports affected by this rule. For example: if I create a new port, this port must also have a QoS assigned (in the OVS database) using the existing queue. But because the Neutron QoS rule doesn’t apply to this port, no QoS agent function is called and the port is not correctly configured.

Any good idea to solve this problem?

Maybe creating veth ports between the bridges would be an easy way out (although performance could suffer).

Right now I’m at a dead end.

Regards.


From: Alonso Hernandez, Rodolfo [mailto:rodolfo.alonso.hernandez at intel.com]
Sent: Friday, June 24, 2016 10:10 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][qos] Egress minimum bandwidth assurance

Hello:

Ichihara, thank you for your answer. It was just a test to find out how to set up egress traffic shaping correctly. I was facing this situation and I found the problem: I was using bridges with datapath_type = netdev instead of system. That was the main problem. Now I can correctly apply a QoS and a queue, and assign traffic to this queue.
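For anyone hitting the same pitfall: the userspace (netdev) datapath ignored the linux-htb queues in this test, so each bridge should report "system". A small sketch, assuming the usual bridge names; run the printed commands by hand or via subprocess.

```python
def datapath_check_cmds(bridges=("br-int", "br-tun", "br-phy")):
    """Build the ovs-vsctl queries that confirm each bridge's datapath type."""
    return ["ovs-vsctl get Bridge {b} datapath_type".format(b=b)
            for b in bridges]

for cmd in datapath_check_cmds():
    print(cmd)
```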

To avoid using veth between bridges, I’m implementing the following solution:

-          Create a new queue for each min-QoS rule applied to a port (the same min-QoS rule could be applied to several ports, of course).

-          Because OVS only shapes traffic in the egress direction (from OVS’s point of view):

o   Create a QoS policy for each br-int port in the same network as the port the QoS applies to; then assign the created queue to this QoS policy.

o   Create a QoS policy for the external port in br-tun, and then assign the queue.

o   Create a QoS policy for the external port on br-phy in the same network, and assign the queue.

-          In br-int, table 0, enqueue all traffic entering OVS from the port with the QoS policy into the created queue.
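The wiring just described can be sketched as follows: one queue per min-rate rule, a QoS row on every egress port the traffic can take (peer br-int ports, br-tun, br-phy), and a table-0 enqueue flow in br-int. All port names and the queue number below are hypothetical.

```python
MIN_RATE = 15000000  # example: 15 Mbit/s

def qos_with_queue_cmd(port, queue_num, min_rate):
    """One ovs-vsctl transaction: QoS row on `port` plus a min-rate Queue row."""
    return ("ovs-vsctl -- set Port {p} qos=@qos "
            "-- --id=@qos create QoS type=linux-htb queues:{n}=@q "
            "-- --id=@q create Queue other_config:min-rate={r}"
            .format(p=port, n=queue_num, r=min_rate))

def enqueue_flow_cmd(shaped_port, queue_num):
    """br-int table 0: enqueue everything entering from the shaped port."""
    return ("ovs-ofctl add-flow br-int "
            "'table=0,in_port={p},actions=set_queue:{n},normal'"
            .format(p=shaped_port, n=queue_num))

# Egress points for traffic leaving the shaped port (illustrative names):
egress_ports = ["tap-vm2", "tap-vm3", "patch-tun", "eth1"]
cmds = [qos_with_queue_cmd(p, 1, MIN_RATE) for p in egress_ports]
cmds.append(enqueue_flow_cmd("tap-vm1", 1))
```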

With this implementation, you don’t need veth ports, and all traffic going from the port with the QoS policy to any other VM or external port (physical bridge, tunnel) will be shaped. Of course, this implementation is a bit tangled, so please be gentle in the code review.

Regards.


From: Hirofumi Ichihara [mailto:ichihara.hirofumi at lab.ntt.co.jp]
Sent: Tuesday, June 21, 2016 10:27 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][qos] Egress minimum bandwidth assurance


On 2016/06/15 18:54, Alonso Hernandez, Rodolfo wrote:
Hello:

Context: try to develop a driver for this feature in OVS.

During the last week I’ve been testing several scenarios to make a POC of this feature.

Scenario 1:
Three VMs connected to br-int, sending traffic through br-physical to another host (an external physical machine).
The first VM will have a minimum BW of 15 Mb. The physical port will be limited to 20 Mb, so VM2 and VM3 should share only 5 Mb of available BW.
The three VMs use iperf to inject traffic toward the external host.

A) One QoS policy and queue is created at the VM1 port (with other_config:{min-rate=15000000}). The traffic is not shaped.
B) Another QoS policy and queue with this minimum BW is created at the int-phy patch port. The traffic is not shaped.
C) Another QoS policy and queue with this minimum BW is created at the phy-int patch port. The traffic is not shaped.
D) Another QoS policy and queue with this minimum BW is created at the physical port. The traffic is still not shaped.
In OVS, all traffic from VM1 is matched and steered to the correct QoS and queues at those ports.
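Step D with the scenario's 20 Mb cap could look like this: a linux-htb QoS on the physical port whose total max-rate is 20 Mb/s, with queue 1 guaranteeing VM1's 15 Mb/s. The port name "eth1" is an assumption about the test host.

```python
def physical_port_qos_cmd(port="eth1", cap_bps=20000000, min_bps=15000000):
    """QoS capped at `cap_bps` on the physical port; queue 1 guarantees `min_bps`."""
    return ("ovs-vsctl -- set Port {p} qos=@qos "
            "-- --id=@qos create QoS type=linux-htb "
            "other_config:max-rate={c} queues:1=@q1 "
            "-- --id=@q1 create Queue other_config:min-rate={m}"
            .format(p=port, c=cap_bps, m=min_bps))
```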
It seems this design doesn’t account for scenarios like DVR and multiple NICs. I thought the queue should be set in br-int, with a veth pair (instead of a patch port) between br-int and br-tun. However, I suspect this could cause an issue where traffic cannot turn back within br-int. That may happen in the Scenario 2 case.

Therefore, I think we should set the queue on the physical port, but then we have the problem of how to specify the NIC in some cases (VLAN, VXLAN, DVR-mode router and DVR floating IP).



Scenario 2:
Similar to scenario 1, but using a fourth VM to act as server. In this case, traffic only goes through br-int.
A) One QoS policy and queue is created at the VM1 port (with other_config:{min-rate=15000000}). The traffic is not shaped.
B) Another QoS policy and queue with this minimum BW is created at the output port (the VM4 server port). The traffic is still not shaped.

I think we cannot manage this case, because we don’t know the maximum bandwidth of traffic inside br-int, and that bandwidth is usually sufficient anyway.
We should focus our attention on a case that the traffic goes out to other nodes.

Thanks,
Hirofumi

I need some help with this implementation, because I’m running out of time and ideas!

Thank you in advance.



__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


