[Openstack-operators] Network Configuration - Centos 6.2

Hugo R Hernandez hdezmora at gmail.com
Thu Apr 18 13:44:25 UTC 2013


Miguel Angel, the entry in the sudoers file did the trick. Thanks!

-- 
Hugo R Hernández-Mora
"If your efforts are met with indifference, don't be discouraged: the sun puts
on a marvelous show every morning while most people are still asleep."
- Anonymous (Brazilian)


On Thu, Apr 18, 2013 at 2:34 AM, Miguel Angel Diaz Corchero <
miguelangel.diaz at externos.ciemat.es> wrote:

>  Hi Hugo,
>
> In Folsom, I fixed the error you showed in the log by adding the line:
>
> nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *
>
> in /etc/sudoers
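>
> The "you must have a tty to run sudo" error in your log comes from the
> "Defaults requiretty" line that CentOS 6 ships in /etc/sudoers, so you may
> also need to relax that for the nova user. Roughly (edit with visudo; adjust
> paths to your packaging, I have not re-checked this on Grizzly):
>
> nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *
> Defaults:nova !requiretty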
>
> Regards
> Miguel.
>
>
> On 17/04/13 19:53, Hugo R Hernandez wrote:
>
> Hello Openstack operators,
> I'm experiencing a problem similar to the one posted on Fri Jun 1 18:07:26
> 2012 about "Network Configuration - Centos 6.2" on this mailing list (
> http://lists.openstack.org/pipermail/openstack-operators/2012-June/001740.html).
> I have now spent almost two weeks trying to get this working, without any luck.
>
> Essentially, I have a controller node plus a compute node for this project,
> and I plan to add a second compute node.  All of these servers have
> four network interfaces, so initially I would like to use:
>
>  eth0 for 'public' access (10.12.10.0/23)
> eth3 for private access (192.168.10.192/26)
>
>
> I have been trying different options using eth3 for the bridge (br3 or
> br100 - I'm not sure whether I'm forced to use br100, but I have used both,
> with and without assigning an IP), and in no case have I been able to get
> this working.
>
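> For what it's worth, the bridge configs I have been trying on CentOS 6 look
> roughly like the following, with or without an IPADDR line (I am not even
> sure a pre-created bridge is required, since nova-network with FlatDHCP can
> apparently build it from flat_interface on its own):
>
> # /etc/sysconfig/network-scripts/ifcfg-br3
> DEVICE=br3
> TYPE=Bridge
> ONBOOT=yes
> BOOTPROTO=none
>
> # /etc/sysconfig/network-scripts/ifcfg-eth3
> DEVICE=eth3
> ONBOOT=yes
> BOOTPROTO=none
> BRIDGE=br3
>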
> In fact, things are now worse: I used to have at least a service listening
> between the two servers, controller and compute node, but not anymore (state XXX):
>
>  [root at euler ~]#  nova-manage service list
> Binary           Host                      Zone       Status     State  Updated_At
> nova-conductor   euler.example.com         internal   enabled    :-)    2013-04-17 16:10:24
> nova-cert        euler.example.com         internal   enabled    :-)    2013-04-17 16:10:25
> nova-scheduler   euler.example.com         internal   enabled    :-)    2013-04-17 16:10:25
> nova-compute     fibonacci.example.com     nova       enabled    XXX    2013-04-17 15:30:09
>
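> (As far as I understand, the XXX state just means nova-compute on fibonacci
> has stopped checking in; /var/log/nova/compute.log on the compute node, plus
> a quick check that it can still reach the controller's message broker -
> something like "telnet 10.12.10.35 5672", since qpid listens on 5672 by
> default - should show why.)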
>
> Installed packages are:
>
> [root at euler ~]# rpm -qa | grep openstack
> openstack-utils-2013.1-6.el6.noarch
> openstack-glance-2013.1-1.el6.noarch
> openstack-nova-api-2013.1-2.el6.noarch
> openstack-nova-console-2013.1-2.el6.noarch
> openstack-keystone-2013.1-1.el6.noarch
> openstack-nova-scheduler-2013.1-2.el6.noarch
> openstack-nova-conductor-2013.1-2.el6.noarch
> openstack-nova-2013.1-2.el6.noarch
> openstack-nova-common-2013.1-2.el6.noarch
> openstack-nova-network-2013.1-2.el6.noarch
> openstack-nova-cert-2013.1-2.el6.noarch
> openstack-nova-objectstore-2013.1-2.el6.noarch
> openstack-nova-compute-2013.1-2.el6.noarch
>
>
> When following the default documentation, I got this error when trying to
> create a network:
>
>  [root at euler ~]# nova network-create private --fixed-range-v4=192.168.10.192/26 --bridge-interface=br3
> ERROR: The server has either erred or is incapable of performing the
> requested operation. (HTTP 500) (Request-ID: req-7b4227e8-ebd4-444f-84de-4bcc0f42b0dd)
>
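> (In case it matters: I am not sure the flags above are right; it may be that
> the bridge device belongs in --bridge and the physical NIC in
> --bridge-interface, i.e. something closer to
> "nova network-create private --fixed-range-v4=192.168.10.192/26 --bridge=br3
> --bridge-interface=eth3", but I have not confirmed that. The real exception
> behind the HTTP 500 should be in /var/log/nova/api.log on the controller.)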
>
> For details, here is my nova.conf file, as explained in the documentation for
> RHEL-based distros version 6 (OpenStack Grizzly):
>
>  [root at euler ~]# cat /etc/nova/nova.conf
> [DEFAULT]
>
> # LOGS/STATE
> verbose = True
> logdir = /var/log/nova
> state_path = /var/lib/nova
> lock_path = /var/lib/nova/tmp
> rootwrap_config = /etc/nova/rootwrap.conf
>
> # SCHEDULER
> compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
>
> # VOLUMES
> volume_driver = nova.volume.driver.ISCSIDriver
> volume_group = nova-volumes
> volume_name_template = volume-%s
> iscsi_helper = tgtadm
> volumes_dir = /etc/nova/volumes
>
> # DATABASE
> sql_connection = mysql://nova:mySecretPass@10.12.10.35/nova
>
> # COMPUTE
> libvirt_type = kvm
> compute_driver = libvirt.LibvirtDriver
> instance_name_template = instance-%08x
> api_paste_config = /etc/nova/api-paste.ini
>
> # COMPUTE/APIS: if you have separate configs for separate services
> # this flag is required for both nova-api and nova-compute
> allow_resize_to_same_host = True
>
> # APIS
> osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
> ec2_dmz_host = 10.12.10.35
> s3_host = 10.12.10.35
>
> # QPID
> qpid_hostname = 10.12.10.35
>
> # GLANCE
> image_service = nova.image.glance.GlanceImageService
> glance_api_servers = 10.12.10.35:9292
>
> # NETWORK
> network_manager = nova.network.manager.FlatDHCPManager
> force_dhcp_release = True
> dhcpbridge_flagfile = /etc/nova/nova.conf
> dhcpbridge = /usr/bin/nova-dhcpbridge
> firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
> # Change my_ip to match each host
> my_ip = 10.12.10.35
> public_interface = eth3
> flat_network_bridge = br3
> flat_interface = eth3
> fixed_range = 192.168.10.192/26
>
> # NOVNC CONSOLE
> novncproxy_base_url = http://10.12.10.35:6080/vnc_auto.html
> # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
> vncserver_proxyclient_address = 10.12.10.35
> vncserver_listen = 10.12.10.35
>
> # GENERAL
> injected_network_template = /usr/share/nova/interfaces.template
> libvirt_nonblocking = True
> libvirt_inject_partition = -1
> rpc_backend = nova.openstack.common.rpc.impl_qpid
>
> # AUTHENTICATION
> auth_strategy = keystone
> [keystone_authtoken]
> auth_host = 127.0.0.1
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = service
> admin_user = nova
> admin_password = mySecretPass
> signing_dirname = /tmp/keystone-signing-nova
>
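> (Per the comments in the file, the compute node's nova.conf is meant to be
> the same except for the per-host values, i.e. roughly:
>
> my_ip = <fibonacci's 10.12.10.x address>
> vncserver_proxyclient_address = <fibonacci's 10.12.10.x address>
>
> with everything else identical.)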
>
> When I check the status of the nova-network service, I get this on both the
> controller and the compute node:
>
>  [root at euler ~]# /etc/init.d/openstack-nova-network status
> openstack-nova-network dead but pid file exists
>
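> (I assume the "dead but pid file exists" state is only a symptom: the daemon
> crashes at startup and leaves its pid file behind, probably something like
> /var/run/nova/nova-network.pid, though I have not verified the path. Once the
> underlying error is fixed, a "service openstack-nova-network restart" should
> clear it.)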
>
> And in the syslogs I have these entries:
>
> 2013-04-17 11:54:01.447 4026 TRACE nova Traceback (most recent call last):
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/bin/nova-network", line 54, in <module>
> 2013-04-17 11:54:01.447 4026 TRACE nova     service.wait()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 689, in wait
> 2013-04-17 11:54:01.447 4026 TRACE nova     _launcher.wait()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 209, in wait
> 2013-04-17 11:54:01.447 4026 TRACE nova     super(ServiceLauncher, self).wait()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 179, in wait
> 2013-04-17 11:54:01.447 4026 TRACE nova     service.wait()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 166, in wait
> 2013-04-17 11:54:01.447 4026 TRACE nova     return self._exit_event.wait()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
> 2013-04-17 11:54:01.447 4026 TRACE nova     return hubs.get_hub().switch()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch
> 2013-04-17 11:54:01.447 4026 TRACE nova     return self.greenlet.switch()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 192, in main
> 2013-04-17 11:54:01.447 4026 TRACE nova     result = function(*args, **kwargs)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 147, in run_server
> 2013-04-17 11:54:01.447 4026 TRACE nova     server.start()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 429, in start
> 2013-04-17 11:54:01.447 4026 TRACE nova     self.manager.init_host()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 1601, in init_host
> 2013-04-17 11:54:01.447 4026 TRACE nova     self.l3driver.initialize(fixed_range=CONF.fixed_range)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/l3.py", line 88, in initialize
> 2013-04-17 11:54:01.447 4026 TRACE nova     linux_net.init_host()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 642, in init_host
> 2013-04-17 11:54:01.447 4026 TRACE nova     add_snat_rule(ip_range)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 632, in add_snat_rule
> 2013-04-17 11:54:01.447 4026 TRACE nova     iptables_manager.apply()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 393, in apply
> 2013-04-17 11:54:01.447 4026 TRACE nova     self._apply()
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 228, in inner
> 2013-04-17 11:54:01.447 4026 TRACE nova     retval = f(*args, **kwargs)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 411, in _apply
> 2013-04-17 11:54:01.447 4026 TRACE nova     attempts=5)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1146, in _execute
> 2013-04-17 11:54:01.447 4026 TRACE nova     return utils.execute(*cmd, **kwargs)
> 2013-04-17 11:54:01.447 4026 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/utils.py", line 239, in execute
> 2013-04-17 11:54:01.447 4026 TRACE nova     cmd=' '.join(cmd))
> 2013-04-17 11:54:01.447 4026 TRACE nova ProcessExecutionError: Unexpected error while running command.
> 2013-04-17 11:54:01.447 4026 TRACE nova Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
> 2013-04-17 11:54:01.447 4026 TRACE nova Exit code: 1
> 2013-04-17 11:54:01.447 4026 TRACE nova Stdout: ''
> 2013-04-17 11:54:01.447 4026 TRACE nova Stderr: 'sudo: sorry, you must have a tty to run sudo\n'
>
> I'm not sure about the last lines since, as you can see, I'm running everything
> as root.  Anyway, I have disabled iptables, with the same results.  I have also
> disabled SELinux.  At this point the documentation is not that helpful, but
> I have been trying to follow the following:
>
>
> http://docs.openstack.org/trunk/openstack-compute/install/yum/content/index.html
> http://fedorapeople.org/~russellb/openstack-lab-rhsummit-2012/index.html
> http://docs.openstack.org/trunk/openstack-compute/admin/content/index.html
>
>
> I have a deadline to abandon this project if there is no progress, but I
> really don't want to do that, so I will greatly appreciate any help or
> hints you can provide.
>
> Thanks in advance,
> -Hugo
> --
> Hugo R Hernández-Mora
> "If your efforts are met with indifference, don't be discouraged: the sun puts
> on a marvelous show every morning while most people are still asleep."
> - Anonymous (Brazilian)
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> --
> Miguel Angel Díaz Corchero
> System Administrator / Researcher
> c/ Sola nº 1; 10200 TRUJILLO, SPAIN
> Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
>
> CETA-Ciemat <http://www.ceta-ciemat.es/>
>