[Openstack] Installing Openstack Liberty with Openvswitch support

Jose Manuel Ferrer Mosteiro jmferrer.paradigmatecnologico at gmail.com
Mon Jul 18 10:24:05 UTC 2016


 

I had the same problem and I solved it by creating bridges.

I have a management bridge (osm) and an external bridge (ose).
local_ip is the IP of the tunnel interface; the management bridge can be
used for it. This openvswitch_agent.ini template shows the settings:
https://github.com/paradigmadigital/ansible-openstack-vcenter/blob/develop/etc_ansible/roles/networking-compute-controller/templates/openvswitch_agent.ini.j2
[3]
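
For illustration, a minimal openvswitch_agent.ini sketch for this kind of
layout. The address and the mapping name are assumptions, not values from
my setup: local_ip must be the IP that sits on the tunnel (data)
interface, and bridge_mappings points the external provider network at the
ose bridge:

    [ovs]
    # assumption: 10.0.1.21 is this node's tunnel/data IP
    local_ip = 10.0.1.21
    # assumption: the provider network "external" maps to the ose bridge
    bridge_mappings = external:ose

    [agent]
    tunnel_types = gre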

The compute node has two bridges with IP addresses configured, attached to
eth0 (osm) and eth1 (ose).
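
If it helps, this is roughly how those compute-node bridges can be created
with Open vSwitch. The IP addresses are only examples, and the "ip addr
flush" lines will drop an SSH session running over those NICs, so run this
from the console:

    # create the management and external bridges and attach the physical NICs
    ovs-vsctl add-br osm
    ovs-vsctl add-port osm eth0
    ovs-vsctl add-br ose
    ovs-vsctl add-port ose eth1

    # move the IP configuration from the NICs to the bridges (example addresses)
    ip addr flush dev eth0
    ip addr add 192.168.1.21/24 dev osm
    ip link set osm up
    ip addr flush dev eth1
    ip addr add 10.0.1.21/24 dev ose
    ip link set ose up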

The network+controller server is a KVM virtual machine running on the
compute node. Its eth1 (ose) is linked to the compute node's ose bridge
and its eth0 (osm) to the osm bridge. ose has no IP configuration, but osm
does. All communication between the network+controller and compute nodes
uses the osm network interface.
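
A sketch of how the VM's NICs can be attached to those bridges in the
libvirt domain XML; the virtualport element tells libvirt the bridge is an
Open vSwitch bridge (the exact layout of your domain definition may of
course differ):

    <!-- guest eth0, attached to the osm (management) bridge -->
    <interface type='bridge'>
      <source bridge='osm'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
    </interface>
    <!-- guest eth1, attached to the ose (external) bridge -->
    <interface type='bridge'>
      <source bridge='ose'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
    </interface>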

How to access the API and Horizon? I use an Apache reverse proxy on the
compute node.
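
Something along these lines works for the reverse proxy (with mod_proxy
and mod_proxy_http enabled); the ServerName and the controller VM hostname
are placeholders, and the API services listening on other ports need their
own ProxyPass entries:

    <VirtualHost *:80>
        ServerName cloud.example.com

        ProxyPreserveHost On
        # forward Horizon to the controller VM over the osm network
        ProxyPass        / http://controller-vm/
        ProxyPassReverse / http://controller-vm/
    </VirtualHost>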

On 2016-07-06 13:50, Daniel Ruiz Molina wrote: 

> Hello,
> 
> I'm having some problems after installing a small test cloud (one controller that also acts as the network node, and two computes).
> 
> I'm executing all the commands from http://docs.openstack.org/liberty/install-guide-rdo [1], but when I launch an instance, it doesn't receive a DHCP offer (however, the controller+network server shows in the dashboard that an IP address has been assigned to the instance that is being created).
> 
> In my scenario, the servers have this configuration:
> server: network+controller --> 3 NICs --> 1 with a public IP (also used for OpenStack management), 1 with a private IP for OpenStack VM data (GRE tunnels) and 1 with no IP for the external network (floating IPs)
> computes: 2 NICs --> 1 with a public IP (also used for OpenStack management) and 1 with a private IP for OpenStack VM data (GRE tunnels).
> 
> Now, I'm confused because I don't know if "local_ip" in /etc/neutron/plugins/ml2/openvswitch_agent.ini must have the public IP (from the mgmt NIC) or the private IP (from the data NIC).
> 
> What I want is for all communication between hypervisors (schedulers, conductors, nova, ...) to run on eth0 (public IP), and for all communication of the running instances (all traffic from/to br-tun and br-int, all Open vSwitch data and internal communication between running instances) to run on eth1 (private IP).
> 
> I don't know if this scenario is possible... but I suppose it is...
> 
> My computers NEED to have an eth0 NIC with a public IP and an eth1 NIC with a private IP, so I need to configure my cloud with those NICs (in other words, I can't have a NIC with no IP configuration...).
> 
> Could anybody help me?
> 
> Thanks!
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [2]
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [2]
 

Links:
------
[1] http://docs.openstack.org/liberty/install-guide-rdo
[2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[3] https://github.com/paradigmadigital/ansible-openstack-vcenter/blob/develop/etc_ansible/roles/networking-compute-controller/templates/openvswitch_agent.ini.j2

