[openstack-dev] [NFV] Specific example NFV use case for a data plane app
sgordon at redhat.com
Mon Jun 16 19:59:23 UTC 2014
----- Original Message -----
> From: "Calum Loudon" <Calum.Loudon at metaswitch.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Hello all
> At Wednesday's meeting I promised to supply specific examples to help
> illustrate the NFV use cases and also show how they map to some of the
> blueprints. Here's my first example - info on our session border
> controller, which is a data plane app. Please let me know if this is
> the sort of example and detail the group are looking for, then I can
> add it into the wiki and send out info on the second, a vIMS core.
> Use case example
> Perimeta Session Border Controller, Metaswitch Networks. Sits on the
> edge of a service provider's network and polices SIP and RTP (i.e. VoIP)
> control and media traffic passing over the access network between
> end-users and the core network or the trunk network between the core and
> another SP.
> Characteristics relevant to NFV/OpenStack
> Fast & guaranteed performance:
> - fast = throughput on the order of several million VoIP packets (~64-220
> bytes depending on codec) per second per core (achievable on COTS hardware)
> - guaranteed via SLAs.
> Fully HA, with no SPOFs and service continuity over software and hardware
> failures.
> Elastically scalable by adding/removing instances under the control of the
> NFV orchestrator.
> Ideally, ability to separate traffic from different customers via VLANs.
> Requirements and mapping to blueprints
> Fast & guaranteed performance - implications for network:
> - the packets per second target -> either SR-IOV or an accelerated
> DPDK-like data plane
> - maps to the SR-IOV and accelerated vSwitch blueprints:
> - "SR-IOV Networking Support"
> - "Open vSwitch to use patch ports"
> - "userspace vhost in ovs vif bindings"
> - "Snabb NFV driver"
> - "VIF_SNABB" (https://blueprints.launchpad.net/nova/+spec/vif-snabb)
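[For illustration: as the SR-IOV blueprint eventually landed, the flow is driven by a PCI whitelist on the compute host plus a port with a direct VNIC type. This is a sketch of the Juno-era CLI, not something from Calum's mail; "eth3", "physnet2", "sriov-net" and the flavor/image names are placeholders.]

```shell
# nova.conf on the compute host: whitelist the VFs of the data-plane NIC
# (devname and physical_network here are placeholder values)
#   pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet2"}

# Create a port that requests an SR-IOV virtual function
neutron port-create sriov-net --binding:vnic_type direct --name perimeta-data0

# Boot the SBC instance with that port attached
nova boot --flavor perimeta.large --image perimeta \
    --nic port-id=<port-uuid> perimeta-sbc-1
```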
> Fast & guaranteed performance - implications for compute:
> - to optimize data rate we need to keep all working data in L3 cache
> -> need to be able to pin cores
> - "Virt driver pinning guest vCPUs to host pCPUs"
> - similarly to optimize data rate need to bind to NIC on host CPU's bus
> - "I/O (PCIe) Based NUMA Scheduling"
> - to offer guaranteed performance as opposed to 'best efforts' we need
> to control placement of cores, minimise TLB misses and get accurate
> info about core topology (threads vs. hyperthreads etc.); maps to the
> remaining blueprints on NUMA & vCPU topology:
> - "Virt driver guest vCPU topology configuration"
> - "Virt driver guest NUMA node placement & topology"
> - "Virt driver large page allocation for guest RAM"
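[For illustration: as these pinning/NUMA/large-page blueprints were implemented, they are driven from flavor extra specs rather than anything in the guest. A hedged sketch, with the flavor name invented for this example:]

```shell
# Pin each guest vCPU to a dedicated host pCPU, constrain the guest to a
# single NUMA node, and back guest RAM with large (huge) pages
nova flavor-key perimeta.large set hw:cpu_policy=dedicated
nova flavor-key perimeta.large set hw:numa_nodes=1
nova flavor-key perimeta.large set hw:mem_page_size=large
```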
> - may need support to prevent 'noisy neighbours' stealing L3 cache -
> unproven, and no blueprint we're aware of.
> - requires anti-affinity rules to prevent active/passive being
> instantiated on same host - already supported, so no gap.
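[The anti-affinity piece is indeed already expressible today with server groups and scheduler hints; a sketch with invented names:]

```shell
# Create an anti-affinity server group, then boot the active and passive
# instances into it so the scheduler places them on different hosts
nova server-group-create perimeta-ha anti-affinity
nova boot --flavor perimeta.large --image perimeta \
    --hint group=<group-uuid> perimeta-active
nova boot --flavor perimeta.large --image perimeta \
    --hint group=<group-uuid> perimeta-passive
```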
> Elastic scaling:
> - similarly readily achievable using existing features - no gap.
> VLAN trunking:
> - maps straightforwardly to "VLAN trunking networks for NFV"
> (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).
> - being able to offer apparent traffic separation (e.g. service
> traffic vs. application management) over single network is also
> useful in some cases
> - "Support two interfaces from one VM attached to the same network"
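[For reference, VLAN trunking as eventually delivered in Neutron (the later "trunk" extension, well after this thread) attaches per-customer VLAN subports to a parent port; port and trunk names below are invented for illustration:]

```shell
# The parent port carries untagged traffic; each subport maps a VLAN ID
# to a separate (e.g. per-customer) network
openstack network trunk create --parent-port parent0 perimeta-trunk
openstack network trunk set perimeta-trunk \
    --subport port=cust-a-port,segmentation-type=vlan,segmentation-id=101
```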
Thanks for contributing this - as a concrete example I think it's very helpful, and I like the breakdown. I've taken the liberty of adding it to the wiki for further editing/discussion (it appears Chris has not split the pages as yet, so for now it's on the meetings page):
The content itself seems fine. However, for brevity - particularly as we add more examples - I think we need a way to condense the space taken up by the requirements list. I notice that for the ETSI use cases a column was added to the blueprints table to track how they aligned; should the same column be reused/abused for user-contributed use cases such as the one you provided, or should we add another column?