[openstack-dev] [NFV] Specific example NFV use case - ETSI #5, virtual IMS
Calum Loudon
Calum.Loudon at metaswitch.com
Tue Jun 24 19:59:29 UTC 2014
Hello all
Following on from my contribution last week of a specific NFV use case
(a Session Border Controller), here's another one, this time for an IMS
core (part of ETSI NFV use case #5).
As we touched on at last week's meeting, this is not making claims about
what every example of a virtual IMS core would need, just as last week's
wasn't describing what every SBC would need. In particular, my IMS core
example is for an application that was designed to be cloud-native from
day one, so the apparent lack of OpenStack gaps is not surprising: other
IMS cores may need more. However, I think overall these two examples
are reasonably representative of the classes of data plane vs. control
plane apps.
Use case example
----------------
Project Clearwater, http://www.projectclearwater.org/. An open source
implementation of an IMS core designed to run in the cloud and be
massively scalable. It provides SIP-based call control for voice and
video as well as SIP-based messaging apps. As an IMS core it provides
P/I/S-CSCF function together with a BGCF and an HSS cache, and includes
a WebRTC gateway providing interworking between WebRTC & SIP clients.
Characteristics relevant to NFV/OpenStack
-----------------------------------------
Mainly a compute application: modest demands on storage and networking.
Fully HA, with no SPOFs and service continuity over software and hardware
failures; must be able to offer SLAs.
Elastically scalable by adding/removing instances under the control of the
NFV orchestrator.
Requirements and mapping to blueprints
--------------------------------------
Compute application:
- OpenStack already provides everything needed; in particular, there are
no requirements for an accelerated data plane, nor for core pinning or
NUMA awareness.
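Purely as an illustration of that point, here's a minimal sketch using
python-novaclient - the credentials, image ID, flavor and names are
placeholders rather than anything Clearwater-specific - showing that a
stock flavor with no extra specs or scheduler hints is all that's needed
to boot a node:

    # Minimal sketch: boot an IMS core node with a stock flavor - no CPU
    # pinning, NUMA placement or data plane acceleration is needed.
    # Credentials, image ID and names below are placeholders.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')

    IMAGE_ID = 'REPLACE-WITH-IMAGE-UUID'

    nova.servers.create(name='sprout-1',
                        image=IMAGE_ID,
                        flavor=nova.flavors.find(name='m1.medium'))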
HA:
- implemented as a series of N+k compute pools; meeting a given SLA
requires being able to limit the impact of a single host failure
- we believe there is a scheduler gap here: affinity/anti-affinity
can be expressed pairwise between VMs, but this needs a concept
equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator
to assign each VM in a pool to one of X buckets and asking
OpenStack to ensure that no single host failure can affect more than
one bucket (there are other approaches which achieve the same end,
e.g. defining a group where the scheduler ensures that no two VMs
within that group are instantiated on the same host)
- if anyone is aware of any blueprints that would address this, please
insert them here
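For comparison with what exists today, here's a sketch (reusing the
hypothetical client and placeholders from the sketch above) of the
Icehouse server group API: the 'anti-affinity' policy corresponds to
the parenthetical alternative - no two members of the group share a
host - but it forces one VM per host rather than the one-bucket-per-host
model an N+k pool really wants:

    # Sketch of the closest existing mechanism: a server group with the
    # 'anti-affinity' policy (Icehouse ServerGroupAntiAffinityFilter).
    # Every member lands on a distinct host, so a 6-node pool needs 6
    # hosts; there is no way yet to say "at most one of my X buckets
    # per host".  Names and IDs are placeholders as before.
    group = nova.server_groups.create(name='sprout-pool',
                                      policies=['anti-affinity'])

    flavor = nova.flavors.find(name='m1.medium')
    for i in range(6):                          # N+k pool of 6 nodes
        nova.servers.create(name='sprout-%d' % (i + 1),
                            image=IMAGE_ID,
                            flavor=flavor,
                            scheduler_hints={'group': group.id})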
Elastic scaling:
- similarly, this is readily achievable using existing features - no gap.
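Again just for illustration, a sketch of how an orchestrator could drive
scaling with existing APIs (continuing the placeholder names above);
Heat autoscaling would be another way to achieve the same thing:

    # Sketch of elastic scaling: the NFV orchestrator simply adds or
    # removes pool members.  'group' is the server group from the sketch
    # above; any application-level quiescing before removal is assumed
    # to be handled by the orchestrator.
    def scale_out(nova, group, image_id, flavor, index):
        """Add one instance to the pool."""
        return nova.servers.create(name='sprout-%d' % index,
                                   image=image_id,
                                   flavor=flavor,
                                   scheduler_hints={'group': group.id})

    def scale_in(nova, server):
        """Remove one instance from the pool."""
        nova.servers.delete(server)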
regards
Calum
Calum Loudon
Director, Architecture
+44 (0)208 366 1177
METASWITCH NETWORKS
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com