[openstack-dev] [Neutron] Re: Service VM discussion - mgmt ifs

Regnier, Greg J greg.j.regnier at intel.com
Wed Oct 2 21:46:11 UTC 2013


Hi
This thread contains multiple topics that we may want to split out...

Re: the service VM management & management interface(s):

It has been pointed out that there are multiple possible models:

1)      In-band management interface using the data network; not desirable because it shares a failure domain with the data traffic

2)      A separate management network and a dedicated service VM network interface

3)      A virtio serial channel exposed as a Unix socket (see the Nova appliance-communication-channel blueprint)

4)      The framework should support both 2) and 3)

                The case for (3) is that it is similar to the VMware solution, so it is well understood and potentially less complex.

Soliciting feedback:
                What are other arguments in favor/against 2) and 3) ?
                What about migration?
                What about security?
                Other considerations?


-          Greg


From: Sumit Naiksatam [mailto:sumitnaiksatam at gmail.com]
Sent: Tuesday, October 01, 2013 11:50 AM
To: OpenStack Development Mailing List
Cc: Regnier, Greg J
Subject: [Neutron] Re: Service VM discussion - mgmt ifs

Adding openstack-dev mailer as this discussion is gathering good momentum.

This thread was started in the context of the blueprint (https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms) and the initial face-to-face discussions we had with a number of people on this thread.


Thanks,
~Sumit.


On Sat, Sep 28, 2013 at 10:41 AM, P Balaji-B37839 <B37839 at freescale.com<mailto:B37839 at freescale.com>> wrote:

Thanks Greg for pulling into this thread.



Good that we are seeing converging thoughts and discussions.



Please find inline comments.



Regards,

Balaji.P



From: Ravi Chunduru [mailto:ravivsn at gmail.com<mailto:ravivsn at gmail.com>]
Sent: Saturday, September 28, 2013 12:43 AM
To: Bob Melander (bmelande)
Cc: Geoff Arnold; Regnier, Greg J; Cummings, Gregory D; David Chang (dwchang); Elzur, Uri; Gary Duan; Joseph Swaminathan; Kanzhe Jiang; Kuang-Ching Wang; Kyle Mestery (kmestery); Maciocco, Christian; Marc Benoit; P Balaji-B37839; Rajesh Mohan; Rudrajit Tapadar (rtapadar); Sridar Kandaswamy (skandasw); Sumit Naiksatam; Yi Sun
Subject: Re: Service VM discussion - mgmt ifs



I am glad we are having a good discussion here. Please see my comments in line.



Thanks,

-Ravi.



On Fri, Sep 27, 2013 at 11:11 AM, Bob Melander (bmelande) <bmelande at cisco.com<mailto:bmelande at cisco.com>> wrote:

Good points Ravi. Some comments from me inline.

Thanks,

Bob

27 sep 2013 kl. 19:31 skrev "Ravi Chunduru" <ravivsn at gmail.com<mailto:ravivsn at gmail.com>>:

Hope I am not too late on this thread. I have definitely missed earlier discussions, so some of my thoughts may already have been discussed.



I want to write down my thoughts so that we are all on the same page.



1) Service VM management Network:

The management interface for the service VM must always be reachable. Even if the virtual networks are blocked, this interface must remain reachable by the plugin.



This implies the management interface should come from the management network. Does that mean we should have a bridge network type for the management network?

I suggest going with an approach like the VMware solution: the hypervisor opens a serial port to each guest appliance. The host sees it as a Unix socket and the guest sees it as a serial port. This way the management channel is reliable and does not collide with data traffic. Thoughts?
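To make the serial-channel idea concrete, here is a minimal sketch of how a host-side management client might exchange commands over the Unix socket that the hypervisor exposes for the guest's virtio-serial port. The length-prefixed JSON framing is purely an assumption for illustration; no framing protocol has been agreed on this thread.

```python
import json
import socket

# Hypothetical framing: 4-byte big-endian length prefix followed by a JSON
# payload. In a real deployment the host side would connect() to the Unix
# socket path the hypervisor created for the guest's virtio-serial channel.

def send_command(sock: socket.socket, command: dict) -> None:
    """Serialize a management command and write it to the channel socket."""
    payload = json.dumps(command).encode("utf-8")
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_command(sock: socket.socket) -> dict:
    """Read one length-prefixed JSON command from the channel socket."""
    header = sock.recv(4)
    length = int.from_bytes(header, "big")
    body = b""
    while len(body) < length:
        body += sock.recv(length - len(body))
    return json.loads(body.decode("utf-8"))
```

In a test, a `socket.socketpair()` can stand in for the two ends of the channel (host and guest).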



I think this is a nice feature to provide. The framework should provide both options, serial port and network interface, for management, since it cannot be guaranteed that all appliances will support serial port access.



Ravi> I completely agree with this.

A management network interface can be used provided there is no service VM migration.

[P Balaji-B37839] We used a virtio-serial port on the KVM hypervisor to propagate configuration from the OS controller to the compute node in our network service chain PoC implementation. We are discussing with Daniel [Nova - Libvirt lead] how to make it more generic and secure. Since a virtio-serial port gives the guest VM access to the host, we should think about host security; right now AppArmor for libvirt takes care of it. We are working on a blueprint for this.



2) Data interfaces

It is always possible for a user to add or delete networks from the services. For example, a user can enable FWaaS for a given virtual network, or remove it later.

The problem becomes complex when a service VM needs to listen on an IP from a subnet. L3 services such as SSL termination or load balancing need to listen on an IP address, and these IPs can change and sometimes come from various networks.



Say a load balancer gets a VIP from network 1 and has a VIP deleted from network 2. That means we need to hot-plug a port from network 1 and delete a port from network 2 on the service VM.



If there are 100 networks, does it need 100 ports? I am fine with that, but should we not think about a single port with IPs coming from all the configured networks?



In the service VM work we have been doing, we make use of VLAN trunking, which is one way of dealing with this. The service VM has a single interface (or a small number of them) for data traffic. When it is attached to a particular tenant network, that network is trunked on the port used by the service VM. The benefit is that no VIF hot-plugging is needed and few VIFs are required (I think PCI will limit the number of VIFs possible to something much less than 100). The downside is that the plugin needs to support VIF trunking, which afaik only the Cisco N1kv plugin supports. There is a blueprint by Kyle Mestery to add VLAN trunking to more plugins; ML2 would be a good next candidate. I think the framework should aim to support a few different "attachment" methods, hot-plugging being one.
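The bookkeeping the trunking approach implies could be sketched like this: one data VIF on the service VM, with tenant networks mapped to VLAN tags on that port. The class and tag-allocation scheme below are illustrative only; the actual programming of tags is done by the plugin (e.g. N1kv today).

```python
class TrunkPort:
    """Sketch of tracking which tenant networks are trunked on one
    service-VM VIF. The names and the simple free-list allocator are
    assumptions for illustration, not part of any existing plugin API."""

    def __init__(self, max_vlans: int = 4094):
        self._tags = {}                         # network_id -> VLAN tag
        self._free = list(range(2, max_vlans + 1))

    def attach(self, network_id: str) -> int:
        """Trunk a tenant network on this VIF; return the VLAN tag used."""
        if network_id in self._tags:
            return self._tags[network_id]
        tag = self._free.pop(0)
        self._tags[network_id] = tag
        # Here the plugin would program the switch port with the new tag.
        return tag

    def detach(self, network_id: str) -> None:
        """Stop trunking a network; its tag becomes reusable."""
        self._free.append(self._tags.pop(network_id))
```

No hot-plug is needed when a tenant network comes or goes; only the tag set on the existing VIF changes.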



Ravi> That is a good solution.

The service VM plugin analyzes the config and can use this to add or remove a network from the multi-segment domain.





3) Multi tenancy support

If a service VM is supplied by the provider, it definitely should have multi-tenancy support. It is quite possible for users to go with a shared service VM to cut costs. In such cases, how should the service VM address overlapping networks? Unless a service VM implements tenant isolation, this is a limitation.



Agree completely about the benefit of multi-tenancy. Likely this will involve creating something like virtual contexts inside the service VM and that will be appliance dependent. Therefore it is less clear to me how much of this can be provided by the framework.

Ravi> We may need to provide reference implementation. Hence asked.

[P Balaji-B37839] Tenants will have multiple configurations based on their deployment. For a service VM shared by the provider, applying tenant-specific configuration by creating a virtual context is good.

The framework should address sharing of a service VM by the provider across multiple tenants, and the tenant-specific configuration mapping.
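The "virtual context" idea discussed above could be sketched as a per-tenant configuration namespace inside one shared service VM, so that overlapping subnets from different tenants never collide. This API is entirely hypothetical; as noted, the real mechanism will be appliance dependent.

```python
class SharedServiceVM:
    """Illustrative sketch of per-tenant virtual contexts inside one
    shared service VM. Each tenant gets an isolated config namespace."""

    def __init__(self):
        self._contexts = {}  # tenant_id -> {resource_name: config}

    def get_context(self, tenant_id: str) -> dict:
        """Return (creating if needed) the tenant's virtual context."""
        return self._contexts.setdefault(tenant_id, {})

    def apply_config(self, tenant_id: str, resource: str, config: dict):
        # Config is scoped to the tenant's context, never global, so two
        # tenants can both use e.g. 10.0.0.0/24 without conflict.
        self.get_context(tenant_id)[resource] = config
```

Two tenants can then hold identical (overlapping) subnet definitions in distinct contexts.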







4) Service VM functionality

Seeing the big picture, vendor appliances will replace the reference implementations (say, HAProxy or an iptables firewall). Following this blueprint, the obvious next target is service insertion/chaining. Going in this direction, we should consider a service VM as a container of multiple services.

- A service VM must define the set of services it can provide

- A service VM must advertise a set of capabilities per service so the service scheduler can make an effective decision

- The future design of service insertion must be aware that a service VM hosts multiple services, and utilize one or more services without leaving the VM (this greatly helps latency)

With this in mind, I wanted to name it an appliance and each service a network function. But I understand that draws up big discussions :)
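The capability-based selection implied by the points above could be sketched roughly as follows. The attribute schema (a map of network function to capability set) is invented for illustration; a real framework would let vendors plug in their own attributes, as discussed later in the thread.

```python
def select_appliance(appliances, required_function, required_capabilities):
    """Pick the first appliance advertising the requested network function
    with all required capabilities; return None if no appliance fits,
    signaling that a new service VM should be spun up.

    `appliances` maps appliance name -> {function_name: set_of_capabilities}.
    This schema is a hypothetical stand-in for vendor-supplied attributes."""
    for name, functions in appliances.items():
        caps = functions.get(required_function)
        if caps is not None and required_capabilities <= caps:
            return name
    return None
```

For example, a scheduler could ask for an appliance offering "lbaas" with SSL support and fall back to creating a new VM on a None result.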



The way I interpret your text above, we have a similar abstraction, but in our case we call it a "hosting entity". A hosting entity hosts service instances (in our case we only have one type of service so far, routing) that materialize the logical resources of the service (routers in our case).



We do scheduling of logical service resources across hosting entities and if need be spin up new VMs.

Ravi> Right, we need good scheduling logic that takes the attributes of what a service VM can support and decides on selection.

[P Balaji-B37839] Correct, Ravi. I think this framework should address generic scheduling so that appliance vendors can plug in their service VM attributes.







5) High availability

In any realistic deployment, no user would want a service bottlenecked on only one service VM.
We require redundancy in the solution for real deployments to happen.



When it comes to HA and bottlenecks I would also like to bring into the picture the task of making configurations inside the service VMs. This can be done using a variety of protocols/mechanisms and can be quite heavy in the sense of requiring many roundtrips. We've found it useful if that can be offloaded and distributed. In our case we have special configuration agents that can run in arbitrary numbers and which take on configuration tasks. The agent is very similar to the l3agent, one difference being that it applies configurations on remote appliances. Personally I would like it if the l3agent evolved to do this.
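A bare-bones sketch of such a configuration agent, in the spirit described above: like the l3agent, but applying configuration on remote appliances. Many such agents could run in parallel, each draining tasks from a shared queue. `apply_remote` stands in for whatever protocol the appliance actually speaks; everything here is an assumption for illustration.

```python
import queue

def config_agent(tasks: "queue.Queue", apply_remote):
    """Drain pending configuration tasks and apply each on its remote
    appliance. Returns the list of appliances configured, in order."""
    applied = []
    while True:
        try:
            appliance, config = tasks.get_nowait()
        except queue.Empty:
            break
        apply_remote(appliance, config)  # potentially many round-trips
        applied.append(appliance)
    return applied
```

Scaling out is then a matter of running more agent processes against the same task queue.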





6) Service VM integration with APIs

I have some thoughts on it. First I would like to know thoughts from you all on the above points.



Thanks,

-Ravi.



(For those with whom I am interacting for the first time, a bit of intro: I am Ravi Chunduru; I worked on firewalls and Alteon load balancer development, and I am currently with PayPal, working in the Cloud Engineering team)

On Thu, Sep 26, 2013 at 5:56 PM, Geoff Arnold <arnoldg at brocade.com<mailto:arnoldg at brocade.com>> wrote:

Good discussion.



When I drafted the DNRM blueprint, I made the pessimistic assumption that the variety of VIF plugging patterns, and the number of ways in which services instances might need to adapt to different network environments, were going to be so great that it would be premature to try to capture VIF plugging into a single scheme driven by some kind of service metadata. I based this on conversations with developers of various virtual appliances in the L3-L7 space, and the kinds of adaptations they had been forced to make to get their products working with different IaaS providers. So I decided to package a service as (effectively) a triple:

  *   a Glance image for the service
  *   a provisioning policy (including Nova parameterization and lifecycle)
  *   a plugin that handled the initialization of a newly-activated service instance
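The DNRM-style service package described above could be modeled as a simple record. The field names below are mine, not DNRM's, and the generic no-op plugin is a placeholder for the per-appliance initialization logic.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ServicePackage:
    """Hypothetical sketch of the triple described above: image,
    provisioning policy, and an initialization plugin for new instances."""
    glance_image: str                                        # service image
    provisioning_policy: dict = field(default_factory=dict)  # Nova params, lifecycle
    # Called with a newly activated service instance; the generic plugin
    # mentioned above would be a do-nothing default like this one.
    init_plugin: Callable = lambda instance: None
```

A vendor appliance would ship with its own `init_plugin` handling whatever adaptation its networking environment requires.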



If a single metadata-driven scheme for VIF plugging can be made to work, that would be great. (And DNRM could accommodate it using a generic plugin.) I am skeptical, however.



The comment about extending "beyond VMs and also support physical devices" is exactly right, of course. One of the DNRM use cases is policy-based selection of virtual or physical LB instances. (We have a customer waiting...)



Geoff



From: <Regnier>, "greg.j.regnier at intel.com<mailto:greg.j.regnier at intel.com>" <greg.j.regnier at intel.com<mailto:greg.j.regnier at intel.com>>
Date: Thursday, September 26, 2013 at 5:19 PM
To: "Bob Melander (bmelande)" <bmelande at cisco.com<mailto:bmelande at cisco.com>>
Cc: "Bob Melander (bmelande" <bmelande at cisco.com<mailto:bmelande at cisco.com>>, "Cummings, Gregory D" <gregory.d.cummings at intel.com<mailto:gregory.d.cummings at intel.com>>, "David Chang (dwchang" <dwchang at cisco.com<mailto:dwchang at cisco.com>>, "Elzur, Uri" <uri.elzur at intel.com<mailto:uri.elzur at intel.com>>, "gduan at varmour.com<mailto:gduan at varmour.com>" <gduan at varmour.com<mailto:gduan at varmour.com>>, Joseph Swaminathan <joeswaminathan at gmail.com<mailto:joeswaminathan at gmail.com>>, Kanzhe Jiang <kanzhe.jiang at bigswitch.com<mailto:kanzhe.jiang at bigswitch.com>>, Kuang-Ching Wang <kc.wang at bigswitch.com<mailto:kc.wang at bigswitch.com>>, "kmestery at cisco.com<mailto:kmestery at cisco.com>" <kmestery at cisco.com<mailto:kmestery at cisco.com>>, "Maciocco, Christian" <christian.maciocco at intel.com<mailto:christian.maciocco at intel.com>>, Marc Benoit <mbenoit at paloaltonetworks.com<mailto:mbenoit at paloaltonetworks.com>>, P Balaji <B37839 at freescale.com<mailto:B37839 at freescale.com>>, Rajesh Mohan <rajesh.mlists at gmail.com<mailto:rajesh.mlists at gmail.com>>, "ravivsn at gmail.com<mailto:ravivsn at gmail.com>" <ravivsn at gmail.com<mailto:ravivsn at gmail.com>>, "Rudrajit Tapadar (rtapadar" <rtapadar at cisco.com<mailto:rtapadar at cisco.com>>, Geoff Arnold <arnoldg at brocade.com<mailto:arnoldg at brocade.com>>, "Sridar Kandaswamy (skandasw" <skandasw at cisco.com<mailto:skandasw at cisco.com>>, "sumitnaiksatam at gmail.com<mailto:sumitnaiksatam at gmail.com>" <sumitnaiksatam at gmail.com<mailto:sumitnaiksatam at gmail.com>>, Yi Sun <yisun at varmour.com<mailto:yisun at varmour.com>>
Subject: RE: Service VM discussion - mgmt ifs



+ GregC, RaviC, Balaji.P, GeoffA

- DanF



Thanks Bob, good input,



Given that a service VM can host multiple logical services, I agree that the scheduling function you describe will be required.

Also mapping of the logical service interfaces to VIFs...



As Yi pointed out, another aspect is to define the model for how to plug VIFs into the network(s) to build the desired logical topology.

It will be good to work up some use cases, e.g. different insertion modes, single network, spanning networks,...



Regards,

                Greg







From: Bob Melander (bmelande) [mailto:bmelande at cisco.com]
Sent: Wednesday, September 25, 2013 2:57 PM
To: Regnier, Greg J; Sumit Naiksatam; Rudrajit Tapadar (rtapadar); David Chang (dwchang); Joseph Swaminathan; Elzur, Uri; Marc Benoit; Sridar Kandaswamy (skandasw); Dan Florea (dflorea); Kanzhe Jiang; Kuang-Ching Wang; Gary Duan; Yi Sun; Rajesh Mohan; Maciocco, Christian; Kyle Mestery (kmestery)
Subject: Re: Service VM discussion - mgmt ifs



Hi all,



Just some first comments from me.



A service typically defines logical resources. Such a resource will be attached to networks through ports. During the lifetime of the resource, it may later be detached from some networks. Neutron routers are one concrete example. It is also possible that multiple service resources are hosted by the same VM. This is analogous to how multiple Neutron routers can be hosted by a network node (to continue the routing service example).



Based on this, it will be good if the framework includes its own scheduling component. Its responsibility is to select an existing VM to host a logical resource, or to signal that a new VM needs to be created (in case no running VMs are suitable). Note that the scheduling component of the service VM framework is different from Nova's scheduler: the latter takes care of scheduling service VMs across compute hosts, whereas the former operates on the logical service resources and the entities that host them (the VMs).
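The select-or-signal behavior of this framework-level scheduler could be sketched as below. The capacity model (a fixed number of logical-resource slots per hosting VM) is an assumption made purely for illustration.

```python
def schedule_logical_resource(hosting_entities, slots_needed=1):
    """Pick an existing hosting entity (service VM) with spare capacity
    for a logical resource, or return None to signal that a new VM must
    be created via Nova. Distinct from the Nova scheduler, which places
    the VM itself on a compute host.

    `hosting_entities` is a list of dicts with hypothetical keys
    "id", "capacity", and "used"."""
    for entity in hosting_entities:
        if entity["capacity"] - entity["used"] >= slots_needed:
            entity["used"] += slots_needed
            return entity["id"]
    return None  # caller should spin up a new service VM
```

On a None result, the caller would create a new VM and retry the placement.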



The service VM framework scheduler should preferably also allow selection of VIFs to host a logical resource's logical interfaces. To clarify the last statement, one "use case" could be to spin up a VM with more VIFs than are needed initially (e.g., if the VM does not support VIF hot-plugging). Another "use case" is if the plugin supports VLAN trunking, so that attachment of the logical resource's logical interface to a network corresponds to trunking of a network on a VIF.



There are at least three ways to dynamically plug a logical service resource inside a VM into networks:

- Create a VM VIF on demand for the logical interface of the service resource ("hot-plugging")

- Pre-populate the VM with a set of VIFs that can be allocated to logical interfaces of the service resources

- Create a set of VM VIFs (on demand or during VM creation) that carry VLAN trunks, for which logical (VLAN) interfaces are created and allocated to service resources
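A framework might model these attachment strategies explicitly and pick one per deployment. The enum and the toy selection policy below are illustrative assumptions, not a proposed API.

```python
from enum import Enum

class AttachmentMethod(Enum):
    """The three attachment strategies listed above; names are mine."""
    HOT_PLUG = "hot-plug"            # create a VIF on demand
    PRE_POPULATED = "pre-populated"  # allocate from VIFs created up front
    VLAN_TRUNK = "vlan-trunk"        # logical VLAN interface on a trunk VIF

def pick_method(vm_supports_hotplug: bool, plugin_supports_trunking: bool):
    """Toy policy: prefer trunking (no hot-plug, few VIFs needed), then
    hot-plug, else fall back to pre-populated VIFs."""
    if plugin_supports_trunking:
        return AttachmentMethod.VLAN_TRUNK
    if vm_supports_hotplug:
        return AttachmentMethod.HOT_PLUG
    return AttachmentMethod.PRE_POPULATED
```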



When the service VM framework has its own scheduling component, it is fairly easy to extend it beyond VMs to also support physical devices. VMs and physical devices are both entities that can host logical resources of Neutron services.



Thanks,

Bob



From: <Regnier>, Greg J <greg.j.regnier at intel.com<mailto:greg.j.regnier at intel.com>>
Date: onsdag 25 september 2013 17:09
To: Sumit Naiksatam <sumitnaiksatam at gmail.com<mailto:sumitnaiksatam at gmail.com>>, "Rudrajit Tapadar (rtapadar)" <rtapadar at cisco.com<mailto:rtapadar at cisco.com>>, "David Chang (dwchang)" <dwchang at cisco.com<mailto:dwchang at cisco.com>>, Joseph Swaminathan <joeswaminathan at gmail.com<mailto:joeswaminathan at gmail.com>>, "Elzur, Uri" <uri.elzur at intel.com<mailto:uri.elzur at intel.com>>, Marc Benoit <mbenoit at paloaltonetworks.com<mailto:mbenoit at paloaltonetworks.com>>, "Sridar Kandaswamy (skandasw)" <skandasw at cisco.com<mailto:skandasw at cisco.com>>, Dan Florea <dflorea at cisco.com<mailto:dflorea at cisco.com>>, Kanzhe Jiang <kanzhe.jiang at bigswitch.com<mailto:kanzhe.jiang at bigswitch.com>>, Kuang-Ching Wang <kc.wang at bigswitch.com<mailto:kc.wang at bigswitch.com>>, Gary Duan <gduan at varmour.com<mailto:gduan at varmour.com>>, Yi Sun <yisun at varmour.com<mailto:yisun at varmour.com>>, Rajesh Mohan <rajesh.mlists at gmail.com<mailto:rajesh.mlists at gmail.com>>, "Maciocco, Christian" <christian.maciocco at intel.com<mailto:christian.maciocco at intel.com>>, Kyle Mestery <kmestery at cisco.com<mailto:kmestery at cisco.com>>, Bob Melander <bmelande at cisco.com<mailto:bmelande at cisco.com>>
Cc: "Regnier, Greg J" <greg.j.regnier at intel.com<mailto:greg.j.regnier at intel.com>>
Subject: Service VM discussion - mgmt ifs



Hi all,



At the face-to-face meeting we had a discussion about the requirements for service VM management.

My takeaway was this:



1)      There is a requirement for a management interface for the VM, a "service VM management port", used to manage the VM resource itself; for example, a health monitor in the event a VIF quietly goes down. This management interface could be common across services.



2)      There is also a requirement for a "service management port" that is used to configure and update the policy/resources of the service; this interface is service specific. For example, adding a rule to a firewall.



Seeking input as to:



1)       Does the above description match the discussion at the meeting?

2)      What are the set of functions needed/desired for the "service VM management" interface?

3)      Other general feedback



Thanks,

                Greg



ps.  Feel free to send your feedback to this list, or comment directly on the blueprint.





--
Ravi





--
Ravi

