Dear Dan,

I have changed my /etc/nova/nova.conf file to use the proper fixed_ip range (a sketch of the lines I mean is below), and now the instances start to launch.

I have one more question: do the compute nodes need to run all of the nova services? On my slave nodes I initially had only nova-compute, and later installed nova-scheduler, nova-api, nova-network and the rest.
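For reference, the fixed-range lines I mentioned above look roughly like this in my nova.conf; the bridge name, CIDR and network size here are placeholders from my test setup rather than recommended values:

    --flat_network_bridge=br100
    --fixed_range=10.0.0.0/12
    --network_size=256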
I started multiple instances from the master node and the VMs are created on the slave nodes, but their status is "shutdown", and I have no idea why.

Also, to use the euca tools against these instances from the slave nodes, EC2_ACCESS_KEY is required. By default it is not set; do I need to set it manually?
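(For context, the only way I know of to get those variables is to source the novarc from the per-project credentials zip on whichever node runs the euca tools; the project name, user name and paths below are placeholders. Please correct me if that is not the intended way.)

    nova-manage project zipfile myproject myuser /root/creds/nova.zip
    unzip /root/creds/nova.zip -d /root/creds
    . /root/creds/novarc    # exports EC2_ACCESS_KEY, EC2_SECRET_KEY, EC2_URL, ...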
Could anyone please clarify these doubts?

--Thanks and regards,
Praveen GK.

On Wed, Aug 17, 2011 at 8:43 PM, Dan Wendlandt <dan@nicira.com> wrote:
Hi Praveen,

The error you are seeing is because there is no 'network' record in the nova database corresponding to 'br100' (which is the default value for the bridge). Spawning a VM requires finding the appropriate network(s) for that VM in the database, and assigning the VM an IP address from the associated network's subnet.
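A quick way to confirm this, assuming the default database name 'nova' and a MySQL backend, is to look at the networks table directly:

    mysql -u root -p nova -e "SELECT id, cidr, bridge FROM networks;"

If no row has bridge = 'br100', the NotFound exception in your traceback is exactly what you would expect.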
Did you run nova-manage to create a network? If so, can you send out the command you ran?

For example, the Running Nova wiki (http://wiki.openstack.org/RunningNova) includes the line:
    sudo nova-manage network create novanetwork 10.0.0.0/8 1 64

Dan
On Tue, Aug 16, 2011 at 9:45 PM, praveen_kumar girir <gkpraven@gmail.com> wrote:
Dear Mandell,

The nova-network process is running, but I am not able to see any log file under the /var/log/libvirt/qemu/ directory.
When I run the describe-instances command, I see this output:

root@openstack2:~# euca-describe-instances
RESERVATION  r-l4zr8lyg  bexar  default
INSTANCE  i-0000000c  ami-2b84327c  networking  test (bexar, openstack2)  0  m1.tiny  2011-08-12T11:29:42Z  nova
RESERVATION  r-s8zj5yje  bexar  default
INSTANCE  i-0000000a  ami-2b84327c  networking  test (bexar, openstack2)  0  m1.tiny  2011-08-12T11:29:33Z  nova

Log output:

nova-compute.log:

2011-08-17 10:00:46,016 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/rpc.py", line 188, in _receive
(nova): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 120, in _wrap
(nova): TRACE:     return f(*args, **kw)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 219, in run_instance
(nova): TRACE:     self.get_network_topic(context),
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 173, in get_network_topic
(nova): TRACE:     host = self.network_manager.get_network_host(context)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/network/manager.py", line 276, in get_network_host
(nova): TRACE:     FLAGS.flat_network_bridge)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/api.py", line 620, in network_get_by_bridge
(nova): TRACE:     return IMPL.network_get_by_bridge(context, bridge)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/sqlalchemy/api.py", line 98, in wrapper
(nova): TRACE:     return f(*args, **kwargs)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/sqlalchemy/api.py", line 1294, in network_get_by_bridge
(nova): TRACE:     raise exception.NotFound(_('No network for bridge %s') % bridge)
(nova): TRACE: NotFound: No network for bridge br100
(nova): TRACE:
2011-08-17 10:01:45,446 INFO nova.compute.manager [-] Found instance 'instance-0000000b' in DB but no VM. State=0, so assuming spawn is in progress.
The NotFound line above is the error I am hitting. Here are the br100 details:

root@openstack2:~# cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

auto br100
iface br100 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    address 10.223.84.45
    netmask 255.255.255.0
    broadcast 10.223.84.255
    gateway 10.223.84.251
    dns-nameservers 10.223.45.36
root@openstack2:~#

Could anyone help me out here?

--Thanks and regards,
Praveen GK.

On Tue, Aug 16, 2011 at 8:24 PM, Mandell Degerness <mdegerne@gmail.com> wrote:
Check first that the network process is running and not producing errors. Then check for errors in /var/log/libvirt/qemu/instance-00000001.log. I suspect the issue lies either with the network configuration or with a missing file for qemu (kvm-pxe).
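Concretely, something along these lines, assuming an Ubuntu host (the instance log name is whatever actually shows up in that directory, and kvm-pxe is the Ubuntu package name as I recall it):

    ps aux | grep nova-network                        # is the network worker running at all?
    tail -n 50 /var/log/nova/nova-network.log         # any tracebacks here?
    tail -n 50 /var/log/libvirt/qemu/instance-00000001.log
    sudo apt-get install kvm-pxe                      # supplies the PXE ROMs qemu may be missing
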
-Mandell
On Mon, Aug 15, 2011 at 9:48 PM, praveen_kumar girir <gkpraven@gmail.com> wrote:
> Dear All,
>
> I am facing an issue while running instances under Ubuntu 11.04 server
> edition.
> Steps followed:
>
> Installed OpenStack Nova, Bexar edition, on my cluster.
> Checked all the running processes.
> Able to publish the image.
> Able to describe the images; the state changes from .gz to untarring.
>
> Ran the command euca-run-instances $emi -k my_key -t m1.tiny.
> After this, checked the status using euca-describe-instances; the status shows
> "NETWORKING" rather than RUNNING.
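> (For completeness, the publish/run sequence I followed was roughly the one
> from the Running Nova wiki; the tarball and bucket names below are
> placeholders rather than my exact ones:)
>
>     uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz mybucket
>     euca-describe-images                   # note the ami-xxxxxxxx id -> $emi
>     euca-add-keypair my_key > my_key.priv
>     euca-run-instances $emi -k my_key -t m1.tiny
>     euca-describe-instances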
> When I checked the logs under the /var/log/nova/nova-manage.log file, I see
> this error:
>
> 2011-08-12 17:36:22,305 INFO nova.compute.manager [-] Found instance
> 'instance-0000000e' in DB but no VM. State=0, so assuming spawn is in
> progress.
>
> Could anyone shed some light on this?
>
> --Thanks and regards,
> Praveen GK,
>
--
Regards,
Mandell Degerness

"True glory consists in doing what deserves to be written; in writing
what deserves to be read; and in so living as to make the world
happier for our living in it."
Pliny the Elder
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~