[Openstack-operators] NoMoreFixedIps Zero fixed ips available.
Gui Maluf
guimalufb at gmail.com
Wed Dec 12 15:16:37 UTC 2012
Nice, I found what was missing!
The VM could not get the route info because there was no dnsmasq process
offering this IP range.
The only way I found to make it work was:
# export NETWORK_ID=2
# /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= \
    --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid \
    --listen-address=192.168.22.65 --except-interface=lo \
    --dhcp-range=192.168.22.66,static,120s --dhcp-lease-max=32 \
    --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf \
    --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Now my VMs on the new network range can get an IP and reach the metadata
server, but they can't get responses from the outside Internet and they can't
ping the other network.
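
To double-check that it really is this dnsmasq serving the new range, something
like the following should work (addresses and paths are simply the ones from
the command above; adjust to your own setup):

# ps aux | grep dnsmasq | grep 192.168.22.65
# netstat -lnup | grep 192.168.22.65
# cat /var/lib/nova/networks/nova-br100.conf    # the --dhcp-hostsfile contents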
On Wed, Dec 12, 2012 at 10:15 AM, Gui Maluf <guimalufb at gmail.com> wrote:
> Anton, actually it's not a fixed IP leaking problem. I'm running out of
> fixed IPs because I couldn't create a wide-range network.
>
> So I restarted nova-network, and now VMs are getting a fixed IP with the
> parameter: *--nic net-id=991cc68c-bf49-42ec-91f5-4c26c410a5aa*
> But now the problem is that VMs assigned to this network can't get the
> correct route table.
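>
> (For reference, the full boot command looks roughly like this; the image,
> flavor and instance name are placeholders, only the net-id is the real one:
> nova boot --image <image-id> --flavor m1.tiny \
>     --nic net-id=991cc68c-bf49-42ec-91f5-4c26c410a5aa test-vm )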
>
> cloud-init-nonet waiting 120 seconds for a network device.
>
> cloud-init-nonet gave up waiting for a network device.
>
> ci-info: lo : 1 127.0.0.1 255.0.0.0 .
>
> ci-info: eth0 : 1 . . fa:16:3e:27:d5:c4
>
> route_info failed
>
>
>
> I can't delete the current network because there are a lot of VMs running
> inside it! But maybe I can modify the range of the current network?
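>
> (In this nova-network setup the range lives in nova's networks and fixed_ips
> tables, so widening it by hand would mean updating networks.cidr/netmask and
> inserting the extra addresses into fixed_ips. Before touching anything, a
> read-only look at what is actually free; table and column names are assumed
> from that era's schema, so verify them against your own DB first. network_id
> 1 here is the original .32/27 network from the nova-manage network list
> output:
>
> mysql -u root -p nova -e "SELECT address, allocated, leased, reserved FROM fixed_ips WHERE network_id = 1 AND deleted = 0;" )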
>
>
>
>
> On Wed, Dec 12, 2012 at 9:45 AM, Anton Tarasov <atarasov at mirantis.com> wrote:
>
>> Hi Gui Maluf! You have to destroy your network and recreate it with a new
>> range of IP addresses; that should help you.
>> Or you can look at the following links:
>>
>>
>> https://github.com/openstack/nova/commit/50b9c032fdc520c1461ff4651b60b4fc4b8f8e19
>>
>> https://github.com/openstack/nova/commit/61ab72d15b3ac61b245e0bdd4a7bee5f3a673f75
>>
>> You can find a solution at these links too.
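>>
>> (Roughly, using the values from your network list, and with flag spellings
>> that may differ between nova versions, the recreate would look like:
>> nova-manage network delete --fixed_range=192.168.22.64/27
>> nova-manage network create private --fixed_range_v4=192.168.22.64/26 \
>>     --num_networks=1 --bridge=br100 --bridge_interface=eth1 \
>>     --network_size=64
>> This only works while no instances hold addresses in the network being
>> deleted.)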
>>
>> On Wed, Dec 12, 2012 at 3:25 PM, Gui Maluf <guimalufb at gmail.com> wrote:
>>
>>> Hey guys, I have a setup with 4 servers: 1 running CC+node and 3 running
>>> compute+volume. The network is FlatDHCP,
>>> and I followed the hastexo guide.
>>> The problem is that the network I created only has 32 addresses, and I
>>> created a lot of VMs. I still need more, and after creating a new
>>> network I can't get more IPs.
>>>
>>> root at cerebro:/var/log/nova# nova-manage network create private \
>>>     --fixed_range_v4=192.168.22.64/27 --num_networks=1 --bridge=br100 \
>>>     --bridge_interface=eth1 --network_size=32
>>>
>>> root at cerebro:/var/log/nova# nova-manage network list
>>> 2012-12-12 09:05:22 DEBUG nova.utils [req-dc28aa03-5b86-4ed6-9daf-2f2ccb51fbbc None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=30864) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
>>> id  IPv4              IPv6  start address  DNS1     DNS2  VlanID  project  uuid
>>> 1   192.168.22.32/27  None  192.168.22.34  8.8.4.4  None  None    None     d6c5a754-3afb-445a-8a01-7512d0036eee
>>> 2   192.168.22.64/27  None  192.168.22.66  8.8.4.4  None  None    None     991cc68c-bf49-42ec-91f5-4c26c410a5aa
>>>
>>> I've tried to change the associated project, but nothing happens.
>>> And before I created the first network (192.168.22.32/27) I had tried to
>>> create it with different values, but the network didn't work. That's why I
>>> used such a small range, even knowing that I'd need more fixed IPs.
>>>
>>> I don't know what to do. I need more fixed IPs and I can't delete the VMs
>>> I already have.
>>> Any solutions?
>>>
>>> Thanks in advance
>>>
>>>
>>> nova-compute error:
>>> 2012-12-12 08:54:38 TRACE nova.compute.manager [instance: 761b8616-9b61-425a-b1ce-7b94bf3477ac]
>>> 2012-12-12 08:54:38 ERROR nova.rpc.amqp [req-315f5588-af60-4645-b9f8-4520e7074ebf b2372e326c0548dfa71ed42e671d0c97 2337dcf9aa9144f9b0605a481bb6dfb5] Exception during message handling
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp Traceback (most recent call last):
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     return f(*args, **kw)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     sys.exc_info())
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     self.gen.next()
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 676, in run_instance
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     do_run_instance()
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     retval = f(*args, **kwargs)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 675, in do_run_instance
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     self._run_instance(context, instance_uuid, **kwargs)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 476, in _run_instance
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     self._set_instance_error_state(context, instance_uuid)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     self.gen.next()
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 449, in _run_instance
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     requested_networks)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 584, in _allocate_network
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     requested_networks=requested_networks)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 178, in allocate_for_instance
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     'args': args})
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/__init__.py", line 68, in call
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     return _get_impl().call(context, topic, msg, timeout)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 674, in call
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     return rpc_amqp.call(context, topic, msg, timeout, Connection.pool)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 343, in call
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     rv = list(rv)
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 311, in __iter__
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp     raise result
>>> 2012-12-12 08:54:38 TRACE nova.rpc.amqp RemoteError: Remote error: NoMoreFixedIps Zero fixed ips available.
>>>
>>>
>>> --
>>> *guilherme maluf*
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Tony Tarasov
>>
>>
>
>
> --
> *guilherme maluf*
>
>
--
*guilherme maluf*