[Openstack] Network Service for L2/L3 Network Infrastructure blueprint

Vishvananda Ishaya vishvananda at gmail.com
Mon Jan 31 18:50:27 UTC 2011


+1

On Jan 31, 2011, at 10:40 AM, John Purrier wrote:

> In order to bring this discussion to a close and get everyone on the same page for Cactus development, here is where we have landed:
>  
> 1.       We will *not* be separating the network and volume controllers and API servers from the Nova project.
>  
> 2.       On-going work to extend the Nova capabilities in these areas will be done within the existing project and be based on extending the existing implementation. The folks working on these projects will determine the best approach for code re-use, extending functionality, and potential integration of additional community contributions in each area.
>  
> 3.       Like all efforts for Cactus, correct trade-offs must be made to maintain deployability, stability, and reliability (key themes of the release).
>  
> 4.       Core design concepts allowing each service to horizontally scale independently, present public/management/event interfaces through a documented OpenStack API, and allow services to be deployed independently of each other must be maintained. If issues arise that do not allow the current code structure to support these concepts, the teams should raise them and open a discussion on how best to address them.
>  
> We will target the Diablo design summit to discuss and review the progress made on these services and determine whether we have taken the best approach for the project.
>  
> Thoughts?
>  
> John
>  
> From: Andy Smith [mailto:andyster at gmail.com] 
> Sent: Friday, January 28, 2011 4:06 PM
> To: John Purrier
> Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack at lists.launchpad.net
> Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
>  
>  
> 
> On Fri, Jan 28, 2011 at 1:19 PM, John Purrier <john at openstack.org> wrote:
> Thanks for the response, Andy. I think we actually agree on this. :)
>  
> You said:
>  
> This statement is invalid: Nova is already broken into services, each of which can be dealt with individually and scaled as such; whether the code is part of the same repository has little bearing on that. The goals of scaling are orthogonal to the location of the code and are much more related to separation of concerns in the code, making sure, for example, that volume code does not rely on compute code (which at this point it largely doesn't).
>  
> The fact that the volume code and the compute code are not coupled makes the separation easy. One factor that I did not mention is that each service will present public, management, and optional extension APIs, allowing each service to be deployed independently.
>  
> So far, all of that is possible under the existing auspices of Nova. DirectAPI will happily sit in front of any of the services independently; each service can be configured to point at a different RabbitMQ instance; and DirectAPI supports a large amount of extensibility, with pluggable managers/drivers supporting a bunch more.
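The pluggable managers/drivers mentioned above boil down to a simple pattern: a service instantiates its backend from a configured dotted import path, so swapping implementations is configuration rather than code. A minimal, hypothetical sketch of that idea (the `load_driver` helper and the path format are invented for illustration, not Nova's actual flag handling):

```python
import importlib

def load_driver(import_path):
    """Instantiate a backend class named by a dotted path, e.g. 'pkg.module.ClassName'.

    This mirrors the pluggable manager/driver idea: which backend a
    service uses is pure configuration, not code.
    """
    module_name, class_name = import_path.rsplit('.', 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()

# Swapping backends is then just a different configured string;
# OrderedDict stands in here for a real driver class.
driver = load_driver('collections.OrderedDict')
print(type(driver).__name__)  # OrderedDict
```

Because the driver is resolved at runtime, two deployments of the same codebase can run entirely different backends without touching the code, which is the point being made about independence not requiring separate repositories.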
>  
> Decoupling the code has always been a goal, as has providing public, management, and extension APIs, and we aren't doing so badly.
>  
> I don't think we disagree about wanting to run things independently, but for the moment I have seen no convincing arguments for separating the codebase.
>  
>  
>  
> You said:
>  
> That suggestion is contradictory: first you say not to separate, then you suggest creating separate projects. I am against creating separate projects; the development is part of Nova until at least Cactus.
>  
> This is exactly my suggestion below. Keep Nova monolithic until Cactus, then integrate the new services once Cactus is shipped. There is work to be done to create the service frameworks, API engines, and extension mechanisms, and to port the existing functionality. All of this can be done in parallel to the stability work being done in the Nova code base. As far as I know there are no major updates coming in either the volume or network management code for this milestone.
>  
> Where is this parallel work being done if not in a separate project?
>  
> --andy
>  
>  
>  
> John
>  
> From: Andy Smith [mailto:andyster at gmail.com] 
> Sent: Friday, January 28, 2011 12:45 PM
> To: John Purrier
> Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack at lists.launchpad.net
> 
> Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
>  
>  
> 
> On Fri, Jan 28, 2011 at 10:18 AM, John Purrier <john at openstack.org> wrote:
> Some clarification and a suggestion regarding Nova and the two new proposed services (Network/Volume).
> 
> To be clear, Nova today contains both volume and network services. We can specify, attach, and manage block devices and also specify network related items, such as IP assignment and VLAN creation. I have heard there is some confusion on this, since we started talking about creating OpenStack services around these areas that will be separate from the cloud controller (Nova).
> 
> The driving factors to consider creating independent services for VM, Images, Network, and Volumes are 1) To allow deployment scenarios that may be scoped to a single service, so that we don't drag all of the Nova code in if we just want to deploy virtual volumes, and 2) To allow greater innovation and community contribution to the individual services.
> 
> Another nice effect of separation of services is that each service can scale horizontally per the demands of the deployment, independent of the other services.
>  
> This statement is invalid: Nova is already broken into services, each of which can be dealt with individually and scaled as such; whether the code is part of the same repository has little bearing on that. The goals of scaling are orthogonal to the location of the code and are much more related to separation of concerns in the code, making sure, for example, that volume code does not rely on compute code (which at this point it largely doesn't).
>  
> 
> We have an existing blueprint discussing the Network Service. We have *not* published a blueprint discussing the Volume Service, this will be coming soon.
> 
> The net is that creating the correct architecture in OpenStack Compute (automation and infrastructure) is a good thing as we look to the future evolution of the project.
> 
> Here is the suggestion. It is clear from the response on the list that refactoring Nova in the Cactus timeframe will be too risky, particularly as we are focusing Cactus on Stability, Reliability, and Deployability (along with a complete OpenStack API). For Cactus we should leave the network and volume services alone in Nova to minimize destabilizing the code base. In parallel, we can initiate the Network and Volume Service projects in Launchpad and allow the teams that form around these efforts to move in parallel, perhaps seeding their projects from the existing Nova code.
> 
>  
> That suggestion is contradictory: first you say not to separate, then you suggest creating separate projects. I am against creating separate projects; the development is part of Nova until at least Cactus.
>  
> Once we complete Cactus we can have discussions at the Diablo DS about the progress these efforts have made, how best to move forward with Nova integration, and what the release targets should be.
> 
> Thoughts?
> 
> John
> 
> -----Original Message-----
> From: openstack-bounces+john=openstack.org at lists.launchpad.net [mailto:openstack-bounces+john=openstack.org at lists.launchpad.net] On Behalf Of Rick Clark
> Sent: Friday, January 28, 2011 9:06 AM
> To: Jay Pipes
> Cc: Ewan Mellor; Søren Hansen; openstack at lists.launchpad.net
> Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
> 
> On 01/28/2011 08:55 AM, Jay Pipes wrote:
> > On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark <rick at openstack.org> wrote:
> > I recognise the desire to do this for Cactus, but I feel that pulling
> > out the network controller (and/or volume controller) into their own
> > separate OpenStack subprojects is not a good idea for Cactus.  Looking
> > at the (dozens of) blueprints slated for Cactus, doing this kind of
> > major rework will mean that most (if not all) of those blueprints will
> > have to be delayed while this pulling out of code occurs. This will
> > definitely jeopardise the Cactus release.
> >
> > My vote is to delay this at a minimum to the Diablo release.
> >
> > And, for the record, I haven't seen any blueprints for the network as
> > a service or volume as a service projects. Can someone point us to
> > them?
> >
> > Thanks!
> > jay
> 
> Whew, Jay, I thought you were advocating major changes in Cactus. That would completely mess up my view of the world :)
> 
> https://blueprints.launchpad.net/nova/+spec/bexar-network-service
> https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
> 
> 
> It was discussed at ODS, but I have not seen any code or momentum to date.
> 
> I think it is worthwhile to have an open discussion about what, if any, of this can be safely done in Cactus. I, like you, Jay, feel a bit conservative. I think we lost sight of the reason we chose time-based releases. It is time to focus on Nova being a solid, trustworthy platform. Features land when they are of sufficient quality; releases contain only the features that passed muster.
> 
> I will be sending an email about the focus and theme of Cactus in a little while.
> 
> Rick
> 
> 
> 
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>  
>  
