[Openstack-operators] Small openstack

Kris G. Lindgren klindgren at godaddy.com
Mon Dec 22 22:26:55 UTC 2014


This is pretty much how we are using neutron.

We have multiple shared provider networks that have "real" IPs, backed by real VLANs, using a real network device as the gateway.  There are only two drawbacks to this approach (that I have found).  The first is that floating IPs won't work (unless the gateway device happens to be a firewall and you are doing NAT there; we aren't).  We are currently in the process of changing floating IPs to use routed IP addresses (which means the routed IP also needs to be bound to a non-ARPing interface in the VM).  The second depends on your network design and the fact that neutron defines a network at layer 2 and assumes that network is available on any compute node.
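Kris doesn't show the exact commands, but a minimal sketch of binding a routed address to a non-ARPing interface inside the VM might look like the following (the interface name, sysctl choices, and the address 203.0.113.10 are illustrative, not from the original post):

```shell
# Sketch (assumptions): attach the routed "floating" IP to a dummy
# interface so the VM answers traffic for it without ARPing for it
# on the provider VLAN.
ip link add fip0 type dummy
ip addr add 203.0.113.10/32 dev fip0
ip link set fip0 up

# Only reply to ARP for addresses configured on the receiving interface,
# and prefer the primary interface's address when announcing.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

The upstream router would then carry a host route for 203.0.113.10/32 pointing at the VM's provider-network address.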

Since our networking is a folded Clos design, layer 3 is terminated at the access switches, so layer 2 only exists on an access pair.  We have modified both neutron and nova to support using metadata placed on compute nodes (actually I think it's a host aggregate) to target which network a VM will live on (along with adding a network scheduler to both neutron and nova).  This was needed because neutron, as I mentioned earlier, defines a network as a layer-2 segment *and* assumes that layer-2 segment is available anywhere in the cloud.  Under our implementation that is not true: a particular layer-2 segment is only available to servers directly attached to that access switch.  So without these changes you have the possibility of booting a VM on a compute node, on a network that the compute node is not attached to and can never be attached to without the node being moved.  The scheduler additions that we did also make it possible to boot a VM without specifying a network and have it "just work".
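The scheduler changes Kris describes are custom, but the host-aggregate side can be sketched with the stock nova CLI of that era. The metadata key `network`, and the aggregate, host, and network names below are assumptions for illustration only:

```shell
# Sketch (assumptions): group the compute nodes behind one access-switch
# pair into a host aggregate, and tag it with the layer-2 provider
# network reachable from that rack. A custom scheduler filter would then
# match a VM's requested network against this metadata.
nova aggregate-create rack-a1
nova aggregate-add-host rack-a1 compute-a1-01
nova aggregate-add-host rack-a1 compute-a1-02
nova aggregate-set-metadata rack-a1 network=provider-vlan-101
```

Stock nova/neutron have no such network-aware placement here; without the custom filter this metadata is inert.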

Either way, from experience, the solution you have chosen, with a simpler and more traditional network design, can and will scale out well beyond the 3-5 compute nodes you are talking about, without any changes to neutron/nova.
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.


From: matt <matt at nycresistor.com>
Date: Monday, December 22, 2014 at 2:46 PM
To: George Shuklin <george.shuklin at gmail.com>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Small openstack

Sounds like a solid way to approach it, George.  I hope you can document and share your methods and experiences.

Sounds like this would be helpful to folks setting up small test environments.

On Mon, Dec 22, 2014 at 4:35 PM, George Shuklin <george.shuklin at gmail.com> wrote:
Thank you for everyone!

After some lurking around I found a rather unusual way: use external networks on a per-tenant basis with directly attached interfaces. This will not only eliminate neutron nodes (as heavy servers), but will remove NAT and simplify everything for tenants. All we need is some VLANs/VXLANs with a few external networks (one per tenant).

Tenants will have no 'routers' and 'floatingips', but will still have DHCP and other yummy neutron things, like private networks with overlapping addressing plans.
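A per-tenant external/provider network of the kind George describes might be created roughly like this with the neutron CLI of that era (the tenant ID, VLAN ID, physical network name, and subnet range are illustrative assumptions):

```shell
# Sketch (assumptions): a VLAN-backed provider network owned by one
# tenant, with VMs attached directly -- no neutron router, no NAT.
neutron net-create tenant-a-net \
    --tenant-id $TENANT_A_ID \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 201

# DHCP is enabled by default; the gateway is the real network device
# terminating VLAN 201, not a neutron router.
neutron subnet-create tenant-a-net 198.51.100.0/24 \
    --name tenant-a-subnet \
    --gateway 198.51.100.1
```

Each tenant gets its own VLAN and network this way, so no neutron L3 agent or network node is involved in the data path.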

Further reports to follow.


On 12/21/2014 12:16 AM, George Shuklin wrote:
Hello.

I've suddenly got a request for a small installation of OpenStack (about 3-5 computes).

They need almost nothing (just a management panel to spawn simple instances, a few friendly tenants), and I'm curious: is nova-network a good solution for this? They don't want a network node, and doing 'network node on compute' is kinda sad.

(And one more: has anyone tried to put management stuff on a compute node in mild production?)


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


