[Openstack] nova-network vs neutron

amit gupta amit.gupta at ask.com
Fri May 30 22:10:48 UTC 2014


Hi Jeremy,

Nova-network basically has three modes (Flat, FlatDHCP, and VLAN), while 
Neutron is designed to handle the more complex setups that need extra 
flexibility.  Your setup does sound complex, so I think you would be 
better off with Neutron networking.
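
As a quick illustration (just a sketch: the network names, physnet 
labels, and VLAN ID below are placeholders, and the exact flags vary a 
bit between releases), the two "public" networks you describe would map 
onto Neutron provider networks roughly like this:

    neutron net-create colo-private --router:external=True \
        --provider:network_type flat --provider:physical_network physnet-eth1
    neutron net-create colo-public --router:external=True \
        --provider:network_type vlan --provider:physical_network physnet-eth0 \
        --provider:segmentation_id 105

Nova-network has no clean way to express two separate external networks 
like that.  I've also left a few short notes inline below on your 
specific points.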

Thanks,
Amit

On 05/30/2014 01:21 PM, Jeremy Utley wrote:
> Hello list,
>
> I'm working on a proof-of-concept OpenStack installation for my 
> company, and I'm looking for answers to a few questions I haven't been 
> able to resolve on my own.  Hopefully someone here can point me in the 
> right direction!
>
> I've got a basic setup already in place: 2 controller nodes and 3 
> compute nodes (with 5 more ready to provision once we get everything 
> worked out).  The main problem I'm facing right now is networking.  
> We need to integrate this OpenStack setup with the network resources 
> already in place in our environment, and I'm not exactly sure how to 
> do so.  I'll start by going over how we are currently set up.
>
> 1. Management and compute nodes are all connected via InfiniBand, 
> which we use as our cluster management network (10.30.x.x/16).  This 
> network has no connectivity to anything else; it's completely private 
> to the cluster.
>
> 2. eth2 on all compute nodes is connected to a VLAN on our switch and 
> bridged to br100, which serves as our OpenStack fixed network 
> (10.18.x.x/16).
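
If it helps anyone following along, point 2 maps to roughly this 
nova.conf (a minimal sketch based on the description above; option 
names may vary slightly by release):

    network_manager = nova.network.manager.FlatDHCPManager
    flat_interface = eth2
    flat_network_bridge = br100
    fixed_range = 10.18.0.0/16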
>
> 3. eth1 on all compute nodes is connected to our existing colo 
> private network (10.5.20.0/22), which we currently define as our 
> external network.  In our current setup (nova-network with 
> FlatDHCPManager), I have reserved a block of IPs from this subnet for 
> use as floating IPs for testing purposes, and this is working 
> perfectly right now.
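
For reference, reserving a block like that under nova-network is 
usually done with something along these lines (the range here is a 
made-up example, not the actual block, and the syntax varies slightly 
by release):

    nova-manage floating create --ip_range=10.5.20.128/27 --pool=nova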
>
> 4. eth0 on all compute nodes is connected to our existing colo public 
> network.  We have a /19 public allocation, broken up into numerous 
> /24 and /25 segments to keep independent divisions of the company 
> fully segregated; each segment is a separate VLAN on the public 
> network switches.  Our current setup does not use this network at all.
>
> Ultimately, we'd like to have our cluster VMs connected to the fixed 
> network (on eth2) and treat both eth1 and eth0 as "public" networks 
> we can draw floating IPs from.  All VMs should connect to eth1 and be 
> able to have floating IPs from that network assigned to them, and 
> they should be able to connect to a single tagged VLAN on eth0 as well.
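
In Neutron terms, each of those would be an external network whose 
subnet carries a floating IP allocation pool, e.g. (a sketch; the pool 
boundaries are arbitrary placeholders, and colo-private is the 
placeholder network name from my example above):

    neutron subnet-create colo-private 10.5.20.0/22 --disable-dhcp \
        --allocation-pool start=10.5.20.100,end=10.5.20.150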
>
> From the reading I've done so far, I think what we're trying to do 
> may be too complicated for nova-network, since it depends on defining 
> a single public interface on the compute nodes, and we could 
> potentially have more than one.  Am I interpreting that correctly, or 
> could we accomplish this with nova-network (perhaps using VlanManager 
> mode)?
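
Your reading matches mine: even VlanManager is built around a single 
public interface per host, along the lines of (sketch; option names 
from the nova-network era):

    network_manager = nova.network.manager.VlanManager
    vlan_interface = eth2
    public_interface = eth1

so expressing two independent public networks gets awkward at best.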
>
> If we have to switch to Neutron, can you run the Neutron services on 
> each compute node?  We have scaling concerns about a dedicated 
> network node, as we could easily end up saturating a full gigabit 
> interface with this cluster in the future.  On top of that, the cost 
> of dedicated network nodes could be prohibitive in the early stages 
> of the deployment.
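
On the direct question: yes.  The Neutron L2 agent runs on every 
compute node anyway, and if you attach VMs directly to provider 
networks there is no L3 agent in the data path, so you can avoid a 
dedicated network node entirely.  A minimal sketch of the per-compute 
mapping, assuming the Open vSwitch agent and the placeholder physnet 
names from my example above:

    [ovs]
    bridge_mappings = physnet-eth1:br-eth1,physnet-eth0:br-eth0

The DHCP and metadata agents can likewise be spread across several 
nodes rather than concentrated on one.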
>
> Anyone got any suggestions for us?
>
> Thank you for your time,
>
> Jeremy Utley
