[Openstack] Decoupling of Network and Compute services for the new Network Service design

Diego Parrilla SantamarĂ­a diego.parrilla.santamaria at gmail.com
Thu Feb 24 09:41:31 UTC 2011


I think we had this conversation some weeks ago. From my perspective,
networking services are normally not considered first-class citizens of the
'Virtual Datacenter'. What Ishimoto-san describes is a Virtual Switch. But
networking services in day-to-day operations also include DNS management,
load balancers, firewalls, VPNs, netflow and others. And this is the main
reason to decouple all these services from the Virtual Machine lifecycle:
there are many heterogeneous network services, and some make sense tied to
the VM while others make sense tied to the Virtual Datacenter (let's call it
the OpenStack Project concept).

The scheduler should handle the network services tied to the VM, but most
network services belong to a different kind of resource scheduler: the
Virtual Datacenter resource scheduler. This is the orchestrator we are
discussing in this thread.

So before adding new virtual resources, I think we need some kind of new
Orchestrator/Resource scheduler that handles dependencies between resources
(a netflow listener needs a virtual Port of a virtual Switch to be
allocated) and pluggable services. What I'm not sure about with this kind of
orchestration component is whether it should implement fixed or dynamic
workflows. Fixed workflows reduce complexity a lot.
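A fixed-workflow orchestrator with resource dependencies could be sketched roughly as follows. The resource types and the dependency graph here are hypothetical, chosen only to illustrate the ordering problem (a netflow listener needing a vPort, which needs a vSwitch):

```python
# Minimal sketch of a fixed-workflow resource orchestrator.
# Resource names and dependencies are invented for illustration.

# Fixed dependency graph: resource -> resources it requires first.
DEPENDENCIES = {
    "vswitch": [],
    "vport": ["vswitch"],
    "netflow_listener": ["vport"],
    "vnic": ["vport"],
}

def allocation_order(requested):
    """Return all needed resources in dependency-safe order."""
    order, seen = [], set()

    def visit(res):
        if res in seen:
            return
        seen.add(res)
        for dep in DEPENDENCIES[res]:
            visit(dep)
        order.append(res)

    for res in requested:
        visit(res)
    return order

# Asking for a netflow listener pulls in its dependencies first.
print(allocation_order(["netflow_listener"]))
# ['vswitch', 'vport', 'netflow_listener']
```

Because the workflow graph is fixed, the orchestrator stays a simple traversal; a dynamic-workflow version would have to build this graph at runtime, which is where the extra complexity comes in.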

A long email and my poor English... hope you understand it!

-
Diego Parrilla
nubeblog.com | nubeblog at nubeblog.com | twitter.com/nubeblog
+34 649 94 43 29




On Wed, Feb 23, 2011 at 9:47 PM, John Purrier <john at openstack.org> wrote:

> And we are back to the discussion about orchestration... Given the
> flexibility of the OpenStack system and the goals of independently
> horizontally scaling services I think we will need to address this head on.
> #3 is the most difficult, but is also the right answer for the project as we
> look forward to adding functionality/services to the mix. This is also where
> we can make good use of asynchronous event publication interfaces within
> services to ensure maximum efficiency.
>
> John
>
> -----Original Message-----
> From: openstack-bounces+john=openstack.org at lists.launchpad.net
> [mailto:openstack-bounces+john=openstack.org at lists.launchpad.net] On
> Behalf
> Of Vishvananda Ishaya
> Sent: Wednesday, February 23, 2011 12:27 PM
> To: Ishimoto, Ryu
> Cc: openstack at lists.launchpad.net
> Subject: Re: [Openstack] Decoupling of Network and Compute services for the
> new Network Service design
>
> Agreed that this is the right way to go.
>
> We need some sort of supervisor to tell the network to allocate the network
> before dispatching a message to compute.  I see three possibilities (from
> easiest to hardest):
>
> 1. Make the call in /nova/compute/api.py (this code runs on the api host)
> 2. Make the call in the scheduler (the scheduler then becomes sort of a
> supervisor to make sure all setup occurs for a vm to launch)
> 3. Create a separate compute supervisor that is responsible for managing the
> calls to different components
>
> The easiest seems to be 1, but unfortunately it forces us to wait for the
> network allocation to finish before returning to the user, which I dislike.
>
> I think ultimately 3 is probably the best solution, but for now I suggest 2
> as a middle ground between easy and best.
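Option 2 above could be sketched roughly like this, with the scheduler acting as supervisor: allocate from the network service first, then cast asynchronously to compute. The service classes and the rpc helper are stand-ins, not actual Nova APIs; they only show the ordering and the fire-and-forget cast:

```python
# Sketch of option 2: scheduler as supervisor.
# FakeNetworkAPI and FakeRPC are hypothetical stand-ins.

class FakeNetworkAPI:
    def allocate_for_instance(self, instance_id):
        # Pretend to allocate an IP for the instance.
        return {"instance": instance_id, "ip": "10.0.0.5"}

class FakeRPC:
    def __init__(self):
        self.cast_log = []

    def cast(self, topic, method, **kwargs):
        # Asynchronous fire-and-forget: the API call that triggered
        # scheduling has already returned to the user by this point.
        self.cast_log.append((topic, method, kwargs))

def schedule_run_instance(network_api, rpc, instance_id, host):
    # Supervisor step: network allocation happens first...
    net_info = network_api.allocate_for_instance(instance_id)
    # ...then compute is told to launch, with the result attached.
    rpc.cast("compute.%s" % host, "run_instance",
             instance_id=instance_id, network_info=net_info)

rpc = FakeRPC()
schedule_run_instance(FakeNetworkAPI(), rpc, "i-0001", "node1")
```

The user-facing API call never blocks on network allocation here, which is exactly the drawback of option 1 that this arrangement avoids.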
>
> Vish
>
> On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:
>
> >
> > Hi everyone,
> >
> > I have been following the discussion regarding the new 'pluggable'
> > network service design, and wanted to drop in my 2 cents ;-)
> >
> > Looking at the current implementation of Nova, there seems to be a very
> > strong coupling between compute and network services.  That is, tasks that
> > are done by the network service are executed at the time of VM
> > instantiation, making the compute code dependent on the network service,
> > and vice versa.  This dependency seems undesirable to me as it adds
> > restrictions to implementing 'pluggable' network services, which can vary,
> > with many ways to implement them.
> >
> > Would anyone be opposed to completely separating out the network service
> > logic from compute?  I don't think it's too difficult to accomplish this,
> > but to do so, it will require that the network service tasks, such as IP
> > allocation, be executed by the user prior to instantiating the VM.
> >
> > In the new network design (from what I've read up so far), there are
> > concepts of vNICs and vPorts, where vNICs are network interfaces that are
> > associated with the VMs, and vPorts are logical ports that vNICs are
> > plugged into for network connectivity.  If we are to decouple network and
> > compute services, the steps required for the FlatManager networking
> > service would look something like:
> >
> > 1. Create ports for a network.  Each port is associated with an IP
> > address in this particular case, since it's an IP-based network.
> > 2. Create a vNIC.
> > 3. Plug a vNIC into an available vPort.  In this case it just means
> > mapping this vNIC to an unused IP address.
> > 4. Start a VM with this vNIC.  The vNIC is already mapped to an IP
> > address, so compute does not have to ask the network service to do any IP
> > allocation.
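The four steps above could be walked through as toy code. Every name here is invented for illustration and does not reflect the eventual OpenStack API; the point is only that by step 4, compute never needs to call the network service:

```python
# Toy walk-through of the four FlatManager-style steps.
# All class and field names are hypothetical.

class Network:
    def __init__(self, addresses):
        # Step 1: each vPort is tied to an unused IP address.
        self.ports = [{"ip": ip, "vnic": None} for ip in addresses]

    def plug(self, vnic):
        # Step 3: map the vNIC to the first free vPort/IP.
        for port in self.ports:
            if port["vnic"] is None:
                port["vnic"] = vnic
                vnic["ip"] = port["ip"]
                return port
        raise RuntimeError("no free vPorts")

def boot_vm(vnic):
    # Step 4: compute never talks to the network service;
    # the vNIC already carries its IP.
    return {"state": "running", "ip": vnic["ip"]}

net = Network(["10.0.0.2", "10.0.0.3"])
vnic = {"mac": "fa:16:3e:00:00:01", "ip": None}   # Step 2
net.plug(vnic)
vm = boot_vm(vnic)
print(vm["ip"])   # 10.0.0.2
```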
> >
> > In this simple example, by removing the request for IP allocation from
> > compute, the network service is no longer needed during VM instantiation.
> > While it may require more steps for the network setup in more complex
> > cases, it would still hold true that, once the vNIC and vPort are mapped,
> > the compute service would not require any network service during VM
> > instantiation.
> >
> > If there is still a need for compute to access the network service,
> > there is another way.  Currently, the setup of the network environment
> > (bridge, VLAN, etc.) is all done by the compute service.  With the new
> > network model, these tasks should either be separated out into a
> > standalone service (a 'network agent') or at least be separated out into
> > modules with generic APIs that the network plugin providers can
> > implement.  By doing so, and if we can agree on a rule that the compute
> > service must always go through the network agent to access the network
> > service, we can still achieve the separation of compute from network
> > services.  Network agents should have full access to the network service,
> > as they are both implemented by the same plugin provider.  Compute would
> > not be aware of the network agent accessing the network service.
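The network-agent indirection could look something like the sketch below: compute programs against a generic interface, and each plugin provider supplies its own implementation, free to talk to its network service internally. The interface and method names are illustrative only:

```python
# Sketch of the 'network agent' indirection. Interface and
# method names are hypothetical, not a proposed API.
import abc

class NetworkAgent(abc.ABC):
    """Generic interface that compute programs against."""
    @abc.abstractmethod
    def setup_vif(self, vnic_id):
        ...

class BridgePluginAgent(NetworkAgent):
    """One provider's implementation (bridge/VLAN setup, etc.)."""
    def setup_vif(self, vnic_id):
        # A real plugin would create the bridge and consult its own
        # network service here; compute never sees that detail.
        return "br-%s" % vnic_id

def spawn_instance(agent, vnic_id):
    # Compute's only network touchpoint is the agent interface.
    device = agent.setup_vif(vnic_id)
    return {"vif_device": device}

print(spawn_instance(BridgePluginAgent(), "vnic-1"))
# {'vif_device': 'br-vnic-1'}
```

Swapping in a different plugin means swapping the `NetworkAgent` subclass; `spawn_instance` stays untouched, which is the separation the paragraph above argues for.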
> >
> > With this design, the network service is only tied to the network REST
> > API and the network agent, both of which are implemented by the plugin
> > providers.  This would allow them to implement their network service
> > without worrying about the details of the compute service.
> >
> > Please let me know if all this made any sense. :-)  Would love to get
> > some feedback.
> >
> > Regards,
> > Ryu Ishimoto
> >
> > _______________________________________________
> > Mailing list: https://launchpad.net/~openstack
> > Post to     : openstack at lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
