[openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for Networking-SFC

rajeev.satyanarayana at wipro.com rajeev.satyanarayana at wipro.com
Thu Jul 27 06:00:19 UTC 2017


Hi Igor/Cathy/Gord,

Sorry for the delay in replying.

As part of monitoring an SFC, I think it would be good to start by maintaining details such as the number of flows assigned to a given SFC and the number of packets/bytes dropped or matched by the policies at each Service Function's entry/exit points, or for the entire SFC. Based on some of these details, an operator can tell how loaded the virtual switch is and decide whether to add a new SFC; they could also help during debugging to identify exactly which service function caused the disruption to the SFC.
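Just to illustrate, the meters I have in mind would be something along these lines (the names, units and types below are only placeholders, not an agreed naming scheme):

    # Purely illustrative meter names/units, not an agreed naming scheme.
    PROPOSED_SFC_METERS = {
        'sfc.flows':        {'unit': 'flow',   'type': 'gauge'},       # flows assigned to a given SFC
        'sfc.packets':      {'unit': 'packet', 'type': 'cumulative'},  # packets matched by the SFC policies
        'sfc.bytes':        {'unit': 'B',      'type': 'cumulative'},  # bytes matched by the SFC policies
        'sfc.packets.drop': {'unit': 'packet', 'type': 'cumulative'},  # packets dropped at SF entry/exit points
        'sfc.bytes.drop':   {'unit': 'B',      'type': 'cumulative'},  # bytes dropped at SF entry/exit points
    }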
As a first step, I think it would be good to add meters that provide the number of packets/bytes matched by policy at the ingress and egress of the entire SFC; to realize that, we would need to add specific pollsters. We can use the neutron_client API to fetch the ingress and egress port details and fetch the corresponding flows for those specific ports (and also get the flow_infos from the flow_classifier and then use them to dump_flows matching those flow_infos), and read the number of packets matched by the policy from those flows.
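To make the idea a bit more concrete, here is a rough sketch (not actual Ceilometer or networking-sfc code) of the dump_flows part. It assumes the flow_classifier fields have already been translated into an ovs-ofctl match string (for example "tcp,nw_dst=10.0.0.0/24"); the helper name sfc_hit_counters, the match string and the bridge name are just illustrative:

    # Rough sketch only: sums the OVS counters of flows matching a classifier.
    # How to derive the match string from the networking-sfc API is left out.
    import re
    import subprocess

    def sfc_hit_counters(match, bridge='br-int'):
        """Sum n_packets / n_bytes over the OVS flows matching 'match'."""
        out = subprocess.check_output(
            ['ovs-ofctl', 'dump-flows', bridge, match]).decode()
        packets = byte_count = 0
        for line in out.splitlines():
            m = re.search(r'n_packets=(\d+), n_bytes=(\d+)', line)
            if m:
                packets += int(m.group(1))
                byte_count += int(m.group(2))
        return packets, byte_count

    # Example: counters for flows classifying TCP traffic to 10.0.0.0/24.
    # print(sfc_hit_counters('tcp,nw_dst=10.0.0.0/24'))

A real pollster would of course publish these as samples against the port chain's resource ID rather than just return them.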

I have just outlined my idea here. Could you please share your opinion and provide your comments?

Thanks and Regards,
Rajeev.