[Openstack] OpenStack Icehouse: one controller and two compute nodes
Lars Kellogg-Stedman
lars at redhat.com
Fri Feb 6 14:08:28 UTC 2015
On Thu, Feb 05, 2015 at 09:10:43AM +0100, Fiorenza Meini wrote:
> Thanks for your suggestion; my tenant_network_types configuration is gre.
> Neutron has several components:
> neutron-dhcp-agent
> neutron-l3-agent
> neutron-metadata-agent
> neutron-plugin-openvswitch-agent
> neutron-server
>
> On my second node, I started only neutron-plugin-openvswitch-agent.
> What is the virtual network connection point between the two nodes?
I'm not sure I understand your question...
Your compute nodes connect to your controller(s) via GRE tunnels,
which are set up by the Neutron openvswitch agent. In a typical
configuration, L3 routing happens on the network host, while the
compute hosts are strictly L2 environments.
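For reference, the tunnel-related settings on a compute node look
roughly like this (a minimal sketch of an Icehouse ML2/OVS agent
configuration; the local_ip value is a placeholder for your compute
node's tunnel-endpoint address, so adjust it for your environment):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (compute node)
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch

    [ovs]
    # address this host uses as its GRE tunnel endpoint
    local_ip = 192.0.2.31
    enable_tunneling = True

    [agent]
    tunnel_types = gre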
The compute hosts only need the openvswitch agent, which in addition
to setting up the tunnels is also responsible for managing port
assignments on the OVS integration bridge, br-int. All the other
logic happens on your controller(s), where all the other neutron
agents are running.
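If you want to see that connection for yourself, the standard Open
vSwitch tools will show it (this assumes the default br-int/br-tun
bridge layout):

    # on a compute node: br-tun should have a gre-* port per peer
    ovs-vsctl show
    ovs-vsctl list-ports br-tun

    # the flows that forward tenant traffic into the tunnels
    ovs-ofctl dump-flows br-tun

And on the controller, 'neutron agent-list' should show the Open
vSwitch agent from each compute node with a :-) in the alive column.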
This is an old post of mine that talks about how things are connected
in a GRE (or VXLAN) environment:
http://blog.oddbit.com/2013/11/14/quantum-in-too-much-detail/
This post doesn't cover the new hotness that is Neutron DVR or HA
routers, but it's still a good starting point.
--
Lars Kellogg-Stedman <lars at redhat.com> | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/