From akapadia_usa at yahoo.com Mon Oct 3 21:44:55 2011 From: akapadia_usa at yahoo.com (Amar Kapadia) Date: Mon, 3 Oct 2011 14:44:55 -0700 (PDT) Subject: [Openstack-operators] Swift Install "Check that you can HEAD the account" step fails Message-ID: <1317678295.35506.YahooMailNeo@web161717.mail.bf1.yahoo.com>

Hi,

I am using the instructions on this page: http://swift.openstack.org/howto_installmultinode.html. I assume this works for the Diablo release of Swift as well. I am on step 2, "Check that you can HEAD the account", of the "Create Swift admin account and test" section, i.e.:

curl -k -v -H 'X-Auth-Token: <token-from-x-auth-token>' <url-from-x-storage-url>

I did this and replaced the token and URL as shown below, but the command failed. Any ideas how I can fix this? Thanks in advance.

Regards,
Amar

===
curl -k -v -H 'X-Auth-Token: AUTH_tk9dac19fd2c87484bb7c9e779b0478968' https://$PROXY_LOCAL_NET_IP:8080/v1/AUTH_system/
* About to connect() to 10.10.10.61 port 8080 (#0)
*   Trying 10.10.10.61... connected
* Connected to 10.10.10.61 (10.10.10.61) port 8080 (#0)
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
*       subject: C=US; ST=CA; L=San Francisco; O=UL; emailAddress=xxx at yyy.com
*       start date: 2011-10-02 09:03:42 GMT
*       expire date: 2011-11-01 09:03:42 GMT
* SSL: unable to obtain common name from peer certificate
> GET /v1/AUTH_system/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
> Host: 10.10.10.61:8080
> Accept: */*
> X-Auth-Token: AUTH_tk9dac19fd2c87484bb7c9e779b0478968
>
< HTTP/1.1 401 Unauthorized
< Content-Length: 358
< Content-Type: text/html; charset=UTF-8
< Date: Mon, 03 Oct 2011 21:13:58 GMT
<
401 Unauthorized

401 Unauthorized

This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.
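A 401 at this step is almost always a stale or mismatched token rather than a proxy fault: the token has to come from the same auth request as the storage URL it is used against, and tokens expire. For reference, the step of the same guide that issues both values looks roughly like this; a sketch only, using the howto's swauth example credentials (system:root / testpass), which are assumptions here, not values taken from this thread:

# ask the auth middleware for a fresh token and a matching storage URL
curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0

# copy X-Auth-Token and X-Storage-Url from that response into TOKEN and STORAGE_URL, then repeat the check
curl -k -v -H "X-Auth-Token: $TOKEN" "$STORAGE_URL"

If a freshly issued token still returns 401, the auth settings in proxy-server.conf (for example the swauth super_admin_key used when the system account was created) are worth re-checking.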

-------------- next part -------------- An HTML attachment was scrubbed... URL: From akapadia_usa at yahoo.com Sun Oct 9 18:35:50 2011 From: akapadia_usa at yahoo.com (Amar Kapadia) Date: Sun, 9 Oct 2011 11:35:50 -0700 (PDT) Subject: [Openstack-operators] (no subject) Message-ID: <1318185350.24665.YahooMailNeo@web161701.mail.bf1.yahoo.com> I've finished installing Swift on 6 EC2 nodes, but I'm struggling on this seemingly simple step: http://docs.openstack.org/diablo/openstack-object-storage/admin/content/part-i-setting-up-secure-access.html Some quick questions: 1. I'm probably missing something obvious but where do I get the "swift" tool from? 2. Also, do these below iptables look OK? 3. Finally, do I have to restart some service to have the new iptables read?? Thanks, Amar Chain INPUT (policy ACCEPT 454 packets, 36014 bytes) ?pkts bytes target ? ? prot opt in ? ? out ? ? source ? ? ? ? ? ? ? destination 45287 4651K ACCEPT ? ? all ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?ctstate RELATED,ESTABLISHED ? ? 0 ? ? 0 ACCEPT ? ? all ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?state RELATED,ESTABLISHED ?3505 ?210K ACCEPT ? ? tcp ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?tcp dpt:ssh ? ?12 ? 720 ACCEPT ? ? tcp ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?tcp dpt:https ? ? 0 ? ? 0 LOG ? ? ? ?all ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: ' ? ? 7 ? 408 ACCEPT ? ? tcp ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?tcp dpt:www ? ? 0 ? ? 0 ACCEPT ? ? tcp ?-- ?any ? ?any ? ? anywhere ? ? ? ? ? ? anywhere ? ? ? ? ? ?tcp dpt:https Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) ?pkts bytes target ? ? prot opt in ? ? out ? ? source ? ? ? ? ? ? ? destination Chain OUTPUT (policy ACCEPT 48502 packets, 8622K bytes) ?pkts bytes target ? ? prot opt in ? ? out ? ? source ? ? ? ? ? ? ? destination -------------- next part -------------- An HTML attachment was scrubbed... URL: From linuxdatacenter at gmail.com Mon Oct 10 09:33:57 2011 From: linuxdatacenter at gmail.com (Linux Datacenter) Date: Mon, 10 Oct 2011 11:33:57 +0200 Subject: [Openstack-operators] Rabbitmq Message-ID: Hi, After I upgraded to diablo, I see a dramatic slowdown when launching and terminating vm-s. When I submit creation of 10 or more vm-s, nova headnode almost freezes. It takes around 3 minutes for the vm-s to spawn. Also euca-terminate-instances deletes instances with lags (about a minute) when destroying 10 or more machines. I also observe instability in rabbitmq server. It freezes occasionally and I need to restart rabbitmq server, nova-api, nova-scheduler to make the whole thing work again. I did not have any of these with cactus release. Has anybody run into such issues as mine with diablo? Do you have a remedy for this? Cheers, -Piotr -- checkout my blog on linux clusters: -- linuxdatacenter.blogspot.com -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From slyphon at gmail.com Wed Oct 12 15:08:01 2011 From: slyphon at gmail.com (Jonathan Simms) Date: Wed, 12 Oct 2011 11:08:01 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift Message-ID: Hello all, I'm in the middle of a 120T Swift deployment, and I've had some concerns about the backing filesystem. I formatted everything with ext4 with 1024b inodes (for storing xattrs), but the process took so long that I'm now looking at XFS again. 
In particular, this concerns me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. In the swift documentation, it's recommended to mount the filesystems w/ 'nobarrier', but it would seem to me that this would leave the data open to corruption in the case of a crash. AFAIK, swift doesn't do checksumming (and checksum checking) of stored data (after it is written), which would mean that any data corruption would silently get passed back to the users. Now, I haven't had operational experience running XFS in production, I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations for using XFS safely in production? From btorch-os at zeroaccess.org Thu Oct 13 16:18:21 2011 From: btorch-os at zeroaccess.org (Marcelo Martins) Date: Thu, 13 Oct 2011 11:18:21 -0500 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: Message-ID: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Hi Jonathan, I guess that will depend on how your storage nodes are configured (hardware wise). The reason why it's recommended is because the storage drives are actually attached to a controller that has RiW cache enabled. Q. Should barriers be enabled with storage which has a persistent write cache? Many hardware RAID have a persistent write cache which preserves it across power failure, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off the barrier support and mount the filesystem with "nobarrier". But take care about the hard disk write cache, which should be off. Marcelo Martins Openstack-swift btorch-os at zeroaccess.org ?Knowledge is the wings on which our aspirations take flight and soar. When it comes to surfing and life if you know what to do you can do it. If you desire anything become educated about it and succeed. ? On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
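The two positions in this sub-thread come down to a pair of mount variants. A sketch, assuming a Swift data partition /dev/sdb1 mounted at /srv/node/sdb1 (device, mount point and the logbufs value are illustrative, in the style of the Swift deployment guide, not taken from this thread):

# RAID controller with a battery-backed write cache and on-disk caches disabled:
# barriers add little safety here and cost IOPS, which is the case the guide's nobarrier targets
mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdb1 /srv/node/sdb1

# JBOD, software RAID or no BBU: leave barriers on (the XFS default) so that
# journal ordering and fsync semantics survive a power loss
mount -t xfs -o noatime,nodiratime,logbufs=8 /dev/sdb1 /srv/node/sdb1

The same choice carries over to the /etc/fstab entries created for each storage device.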
URL: From linuxcole at gmail.com Thu Oct 13 20:50:36 2011 From: linuxcole at gmail.com (Cole Crawford) Date: Thu, 13 Oct 2011 13:50:36 -0700 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless you have disks that do not have write caches. don't follow that recommendation, or for example - fsync won't work which is something swift relies upon. On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins wrote: > Hi Jonathan, > > > I guess that will depend on how your storage nodes are configured (hardware > wise). The reason why it's recommended is because the storage drives are > actually attached to a controller that has RiW cache enabled. > > > > Q. Should barriers be enabled with storage which has a persistent write > cache? > Many hardware RAID have a persistent write cache which preserves it across > power failure, interface resets, system crashes, etc. Using write barriers > in this instance is not recommended and will in fact lower performance. > Therefore, it is recommended to turn off the barrier support and mount the > filesystem with "nobarrier". But take care about the hard disk write cache, > which should be off. > > > Marcelo Martins > Openstack-swift > btorch-os at zeroaccess.org > > ?Knowledge is the wings on which our aspirations take flight and soar. When > it comes to surfing and life if you know what to do you can do it. If you > desire anything become educated about it and succeed. ? > > > > > On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
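Cole's caveat about on-disk write caches can be checked and enforced per drive. A sketch for plain SATA/SAS disks sdb through sdd (drive names are illustrative; behind a hardware RAID controller the cache is normally toggled from the controller's own CLI instead):

for d in sdb sdc sdd; do
    hdparm -W /dev/$d      # report whether the volatile write cache is enabled
    hdparm -W 0 /dev/$d    # turn it off if the filesystem is mounted with nobarrier
done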
URL: From gordon.irving at sophos.com Thu Oct 13 22:11:59 2011 From: gordon.irving at sophos.com (Gordon Irving) Date: Thu, 13 Oct 2011 18:11:59 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: If you are on a Battery Backed Unit raid controller, then its generally safe to disable barriers for journal filesystems. If your doing soft raid, jbod, single disk arrays or cheaped out and did not get a BBU then you may want to enable barriers for filesystem consistency. For raid cards with a BBU then set your io scheduler to noop, and disable barriers. The raid card does its own re-ordering of io operations, the OS has an incomplete picture of the true drive geometry. The raid card is emulating one disk geometry which could be an array of 2 - 100+ disks. The OS simply can not make good judgment calls on how best to schedule io to different parts of the disk because its built around the assumption of a single spinning disk. This is also true for if a write has made it safely non persistent cache (ie disk cache), to a persistent cache (ie the battery in your raid card) or persistent storage (that array of disks) . This is a failure of the Raid card <-> OS interface. There simply is not the richness to say (signal write is ok if on platter or persistent cache not okay in disk cache) or Enabling barriers effectively turns all writes into Write-Through operations, so the write goes straight to the disk platter and you get little performance benefit from the raid card (which hurts a lot in terms of lost iops). If the BBU looses charge/fails then the raid controller downgrades to Write-Through (vs Write-Backed) operation. BBU raid controllers disable disk caches, as these are not safe in event of power loss, and do not provide any benefit over the raid card cache. In the context of swift, hdfs and other highly replicated datastores, I run them in jbod or raid-0 + nobarrier , noatime, nodiratime with a filesystem aligned to the geometry of underlying storage* etc to squeeze as much performance as possible out of the raw storage. Let the application layer deal with redundancy of data across the network, if a machine /disk dies ... so what, you have N other copies of that data elsewhere on the network. A bit of storage is lost ... do consider how many nodes can be down at any time when operating these sorts of clusters Big boxen with lots of storage may seem attractive from a density perspective until you loose one and 25% of your storage capacity with it ... many smaller baskets ... For network level data consistency swift should have a data scrubber (periodic process to read and compare checksums of replicated blocks), I have not checked if this is implemented or on the roadmap. I would be very surprised if this was not a part of swift. 
*you can hint to the fs layer how to offset block writes by specifying a stride width which is the number of data carrying disks in the array and the block size typically the default is 64k for raid arrays From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Cole Crawford Sent: 13 October 2011 13:51 To: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless you have disks that do not have write caches. don't follow that recommendation, or for example - fsync won't work which is something swift relies upon. On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins > wrote: Hi Jonathan, I guess that will depend on how your storage nodes are configured (hardware wise). The reason why it's recommended is because the storage drives are actually attached to a controller that has RiW cache enabled. Q. Should barriers be enabled with storage which has a persistent write cache? Many hardware RAID have a persistent write cache which preserves it across power failure, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off the barrier support and mount the filesystem with "nobarrier". But take care about the hard disk write cache, which should be off. Marcelo Martins Openstack-swift btorch-os at zeroaccess.org "Knowledge is the wings on which our aspirations take flight and soar. When it comes to surfing and life if you know what to do you can do it. If you desire anything become educated about it and succeed. " On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: Hello all, I'm in the middle of a 120T Swift deployment, and I've had some concerns about the backing filesystem. I formatted everything with ext4 with 1024b inodes (for storing xattrs), but the process took so long that I'm now looking at XFS again. In particular, this concerns me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. In the swift documentation, it's recommended to mount the filesystems w/ 'nobarrier', but it would seem to me that this would leave the data open to corruption in the case of a crash. AFAIK, swift doesn't do checksumming (and checksum checking) of stored data (after it is written), which would mean that any data corruption would silently get passed back to the users. Now, I haven't had operational experience running XFS in production, I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations for using XFS safely in production? _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ________________________________ Sophos Limited, The Pentagon, Abingdon Science Park, Abingdon, OX14 3YP, United Kingdom. Company Reg No 2096520. VAT Reg No GB 991 2418 08. -------------- next part -------------- An HTML attachment was scrubbed... 
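Gordon's footnote about aligning the filesystem to the array geometry translates into mkfs-time options. A sketch assuming a 6-data-disk stripe with a 64 KiB chunk size and the 1024-byte inodes mentioned earlier in the thread (all numbers and the device name are illustrative):

# XFS: stripe unit = per-disk chunk size, stripe width = number of data-carrying disks
mkfs.xfs -i size=1024 -d su=64k,sw=6 /dev/sdb1

# ext4 equivalent: stride in 4 KiB blocks (64 KiB / 4 KiB = 16), stripe-width = stride * data disks
mkfs.ext4 -I 1024 -E stride=16,stripe-width=96 /dev/sdb1

On the data-scrubber question: Swift does ship one. The object auditor (started with swift-init object-auditor start) periodically re-reads stored objects and quarantines any whose checksum no longer matches, after which replication restores a good copy, so corruption is detected in the background rather than silently served back to users.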
URL: From J.O'Loughlin at surrey.ac.uk Fri Oct 14 14:13:24 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Fri, 14 Oct 2011 15:13:24 +0100 Subject: [Openstack-operators] configuring the scheduler Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C7A@EXMB01CMS.surrey.ac.uk> Hi, I'm running Diablo and looking for advice on configuring the scheduler. I have compute nodes of differing capabilities and would like to be able to describe that to the scheduler so it can make decisions based on that information. Does anybody know how to do something like this? Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From J.O'Loughlin at surrey.ac.uk Fri Oct 14 14:16:35 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Fri, 14 Oct 2011 15:16:35 +0100 Subject: [Openstack-operators] second availability zone Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C7B@EXMB01CMS.surrey.ac.uk> Hi, is it possible to set up a second availability zone on diablo? If anybody knows how to do this would be very interested in hearing from them. Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From sergey at kulanov.org.ua Fri Oct 14 18:57:53 2011 From: sergey at kulanov.org.ua (Sergey Kulanov) Date: Fri, 14 Oct 2011 21:57:53 +0300 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) Message-ID: <4E988631.10905@kulanov.org.ua> Hi, I have the following troubles while using Openstack on Fedora 16: SOFTWARE (what do we have): - Fedora 16 Beta - Linux server1.example.com 3.1.0-0.rc9.git0.0.fc16.i686.PAE #1 SMP Wed Oct 5 15:51:55 UTC 2011 i686 i686 i386 GNU/Linux - openstack-swift-auth-1.4.0-2.fc16.noarch openstack-glance-2011.3-1.fc16.noarch openstack-swift-1.4.0-2.fc16.noarch openstack-swift-proxy-1.4.0-2.fc16.noarch openstack-swift-account-1.4.0-2.fc16.noarch openstack-nova-2011.3-3.fc16.noarch openstack-swift-object-1.4.0-2.fc16.noarch openstack-swift-container-1.4.0-2.fc16.noarch - glibc-2.14.90-11 -python-2.7.2-4.fc16.i686 I tried to follow this instruction http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova INSTALLATION: 1) Some installation warnings: Downloading Packages: (1/5): openstack-swift-account-1.4.0-2.fc16.noarch.rpm | 26 kB 00:00 (2/5): openstack-swift-auth-1.4.0-2.fc16.noarch.rpm | 9.9 kB 00:00 (3/5): openstack-swift-container-1.4.0-2.fc16.noarch.rpm | 26 kB 00:00 (4/5): openstack-swift-object-1.4.0-2.fc16.noarch.rpm | 44 kB 00:00 (5/5): openstack-swift-proxy-1.4.0-2.fc16.noarch.rpm | 37 kB 00:00 ---------------------------------------------------------------------------------------------------------------------------------------- Total 137 kB/s | 143 kB 00:01 Running Transaction Check Running Transaction Test Transaction Test Succeeded Running Transaction Installing : openstack-swift-object-1.4.0-2.fc16.noarch 1/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-object-1.4.0-2.fc16.noarch error reading information on service swift-object: No such file or directory warning: %post(openstack-swift-object-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-proxy-1.4.0-2.fc16.noarch 2/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-proxy-1.4.0-2.fc16.noarch error reading information on service swift-proxy: No such file or directory warning: %post(openstack-swift-proxy-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-auth-1.4.0-2.fc16.noarch 3/5 Non-fatal POSTIN scriptlet failure 
in rpm package openstack-swift-auth-1.4.0-2.fc16.noarch error reading information on service swift-auth: No such file or directory warning: %post(openstack-swift-auth-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-account-1.4.0-2.fc16.noarch 4/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-account-1.4.0-2.fc16.noarch error reading information on service swift-account: No such file or directory warning: %post(openstack-swift-account-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-container-1.4.0-2.fc16.noarch 5/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-container-1.4.0-2.fc16.noarch error reading information on service swift-container: No such file or directory warning: %post(openstack-swift-container-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installed: openstack-swift-account.noarch 0:1.4.0-2.fc16 openstack-swift-auth.noarch 0:1.4.0-2.fc16 openstack-swift-container.noarch 0:1.4.0-2.fc16 openstack-swift-object.noarch 0:1.4.0-2.fc16 openstack-swift-proxy.noarch 0:1.4.0-2.fc16 Complete! 2) RUNNING: I didn't change any default setting (just add debugging flag) [root at server1 ~]# service openstack-glance-api start; service openstack-glance-registry start /var/log/messages Oct 14 21:20:31 server1 glance-api[1404]: Traceback (most recent call last): Oct 14 21:20:31 server1 glance-api[1404]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 14 21:20:31 server1 glance-api[1404]: timer() Oct 14 21:20:31 server1 glance-api[1404]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 14 21:20:31 server1 glance-api[1404]: cb(*args, **kw) Oct 14 21:20:31 server1 glance-api[1404]: SystemError: error return without exception set Oct 14 21:25:37 server1 glance-registry[1460]: Traceback (most recent call last): Oct 14 21:25:37 server1 glance-registry[1460]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 14 21:25:37 server1 glance-registry[1460]: timer() Oct 14 21:25:37 server1 glance-registry[1460]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 14 21:25:37 server1 glance-registry[1460]: cb(*args, **kw) Oct 14 21:25:37 server1 glance-registry[1460]: SystemError: error return without exception set [root at server1 ~]# service openstack-glance-api status;service openstack-glance-registry status Redirecting to /bin/systemctl status openstack-glance-api.service Loaded: loaded (/lib/systemd/system/openstack-glance-api.service; disabled) Active: active (running) since Fri, 14 Oct 2011 21:20:28 +0300; 6min ago Main PID: 1404 (glance-api) CGroup: name=systemd:/system/openstack-glance-api.service ? 1404 /usr/bin/python /usr/bin/glance-api --config-file /etc/glance/glance-api.conf openstack-glance-registry.service - OpenStack Image Service (code-named Glance) Registry server Loaded: loaded (/lib/systemd/system/openstack-glance-registry.service; disabled) Active: active (running) since Fri, 14 Oct 2011 21:25:37 +0300; 1min 29s ago Main PID: 1460 (glance-registry) CGroup: name=systemd:/system/openstack-glance-registry.service ? 
1460 /usr/bin/python /usr/bin/glance-registry --config-file /etc/glance/glance-registry.conf ---------------------------NOVA START ----------------------- service openstack-nova-api start /var/log/messages Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main process exited, code=killed, status=11 Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service entered failed state. /var/log/nova/api.log 2011-10-14 21:30:54,785 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(created_at, Column) 2011-10-14 21:30:54,804 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(updated_at, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(deleted, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(id, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(name, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(disk_format, Column) 2011-10-14 21:30:54,806 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(container_format, Column) 2011-10-14 21:30:54,806 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(size, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(status, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(is_public, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(location, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(checksum, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(min_disk, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(min_ram, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(owner, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,809 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) constructed 2011-10-14 21:30:54,811 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(image, RelationshipProperty) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(created_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(updated_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(deleted, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(id, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] 
(ImageProperty|image_properties) _configure_property(image_id, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(name, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(value, Column) 2011-10-14 21:30:54,814 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,814 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) constructed 2011-10-14 21:30:54,816 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(image, RelationshipProperty) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(created_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(updated_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(deleted, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(id, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(image_id, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(member, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(can_share, Column) 2011-10-14 21:30:54,819 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,819 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) constructed 2011-10-14 21:30:55,188 DEBUG nova.utils [-] Running sh /usr/lib/python2.7/site-packages/nova/api/ec2/../../CA/genrootca.sh from (pid=1560) runthis /usr/lib/python2.7/site-packages/nova/utils.py:275 2011-10-14 21:30:55,188 DEBUG nova.utils [-] Running cmd (subprocess): sh /usr/lib/python2.7/site-packages/nova/api/ec2/../../CA/genrootca.sh from (pid=1560) execute /usr/lib/python2.7/site-packages/nova/utils.py:165 I didn't find any solution with segfault at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] Thank you ------------ Kind regards, Sergey From Kevin.Fox at pnnl.gov Fri Oct 14 20:45:06 2011 From: Kevin.Fox at pnnl.gov (Kevin Fox) Date: Fri, 14 Oct 2011 13:45:06 -0700 Subject: [Openstack-operators] Nova/libvirt Message-ID: <1318625106.30457.5929.camel@sledge.emsl.pnl.gov> Quick question. How strongly does Nova assume it manages everything in libvirt? I'm curious if I wanted to fire up a few virtual machines manually using libvirt on a nova managed host if it would confuse Nova. I'm ok with Nova not knowing about them. I'm wondering if Nova will assume it has more resources then are available and flip out. 
Thanks, Kevin From markmc at redhat.com Mon Oct 17 10:43:30 2011 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 17 Oct 2011 11:43:30 +0100 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <4E988631.10905@kulanov.org.ua> References: <4E988631.10905@kulanov.org.ua> Message-ID: <1318848212.2048.8.camel@sorcha> Hi Sergey, On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > Installing : > openstack-swift-object-1.4.0-2.fc16.noarch > 1/5 > Non-fatal POSTIN scriptlet failure in rpm package > openstack-swift-object-1.4.0-2.fc16.noarch > error reading information on service swift-object: No such file or directory > warning: %post(openstack-swift-object-1.4.0-2.fc16.noarch) scriptlet > failed, exit status 1 It looks like this problem was reported sometime ago and a patch is waiting to be applied: https://bugzilla.redhat.com/685155 Silas, David - can one of you take care of this or should I? Cheers, Mark. From sacampa at gmv.com Mon Oct 17 10:55:02 2011 From: sacampa at gmv.com (Sergio Ariel de la Campa Saiz) Date: Mon, 17 Oct 2011 12:55:02 +0200 Subject: [Openstack-operators] Storage tape and Swift Message-ID: <947E2550A3F9C740936DDCC9936667B901BE41AEC30B@GMVMAIL4.gmv.es> Hi... Can somebody tell me how to add some storage tapes to a Swift cluster? The main problem is that I have data loaded in these tapes, so I can?t erase them. Thanks... [cid:image002.png at 01CC8CCB.FC71ADE0] [cid:image003.gif at 01CC8CCB.DB36E780] Sergio Ariel de la Campa Saiz Ingeniero de Infraestructuras / Infrastucture Engineer / GMV Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid Tel. +34 91 807 21 00 Fax +34 91 807 21 99 www.gmv.com [cid:image004.gif at 01CC8CCB.DB36E780] [cid:image005.gif at 01CC8CCB.DB36E780] [cid:image006.gif at 01CC8CCB.DB36E780] [cid:image007.gif at 01CC8CCB.DB36E780] ______________________ This message including any attachments may contain confidential information, according to our Information Security Management System, and intended solely for a specific individual to whom they are addressed. Any unauthorised copy, disclosure or distribution of this message is strictly forbidden. If you have received this transmission in error, please notify the sender immediately and delete it. ______________________ Este mensaje, y en su caso, cualquier fichero anexo al mismo, puede contener informacion clasificada por su emisor como confidencial en el marco de su Sistema de Gestion de Seguridad de la Informacion siendo para uso exclusivo del destinatario, quedando prohibida su divulgacion copia o distribucion a terceros sin la autorizacion expresa del remitente. Si Vd. ha recibido este mensaje erroneamente, se ruega lo notifique al remitente y proceda a su borrado. Gracias por su colaboracion. ______________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 5711 bytes Desc: image003.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.gif Type: image/gif Size: 1306 bytes Desc: image004.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.gif Type: image/gif Size: 1309 bytes Desc: image005.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image006.gif Type: image/gif Size: 1279 bytes Desc: image006.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.gif Type: image/gif Size: 1323 bytes Desc: image007.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 222 bytes Desc: image002.png URL: From markmc at redhat.com Mon Oct 17 11:02:54 2011 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 17 Oct 2011 12:02:54 +0100 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <4E988631.10905@kulanov.org.ua> References: <4E988631.10905@kulanov.org.ua> Message-ID: <1318849375.2048.11.camel@sorcha> On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > ---------------------------NOVA START ----------------------- > service openstack-nova-api start > > /var/log/messages > Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault > at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] > Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main > process exited, code=killed, status=11 > Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service > entered failed state. So, there are some known problems with 2.14.90-11 https://admin.fedoraproject.org/updates/FEDORA-2011-14175 Could you try: $> yum downgrade glibc? Hopefully that will get you 2.14.90-10 Thanks, Mark. From sergey at kulanov.org.ua Mon Oct 17 16:15:36 2011 From: sergey at kulanov.org.ua (Sergey Kulanov) Date: Mon, 17 Oct 2011 19:15:36 +0300 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <1318849375.2048.11.camel@sorcha> References: <4E988631.10905@kulanov.org.ua> <1318849375.2048.11.camel@sorcha> Message-ID: <4E9C54A8.9070807@kulanov.org.ua> 17.10.2011 14:02, Mark McLoughlin ?????: > On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > >> ---------------------------NOVA START ----------------------- >> service openstack-nova-api start >> >> /var/log/messages >> Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault >> at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] >> Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main >> process exited, code=killed, status=11 >> Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service >> entered failed state. > So, there are some known problems with 2.14.90-11 > > https://admin.fedoraproject.org/updates/FEDORA-2011-14175 > > Could you try: > > $> yum downgrade glibc? > > Hopefully that will get you 2.14.90-10 > > Thanks, > Mark. > > Hi, Thanks for the replay Actually I tried different glibc versions end even installing openstack on fedora 15, I had the same problem. 
$> yum downgrade glibc glibc-common $> service openstack-glance-api start $> service openstack-glance-registry start works fine, everything starts ok but with some warnings: Oct 17 18:58:35 server1 yum[2795]: Installed: glibc-2.14.90-10.i686 Oct 17 18:58:45 server1 yum[2795]: Installed: glibc-common-2.14.90-10.i686 Oct 17 18:59:05 server1 glance-api[2828]: Traceback (most recent call last): Oct 17 18:59:05 server1 glance-api[2828]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 18:59:05 server1 glance-api[2828]: timer() Oct 17 18:59:05 server1 glance-api[2828]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 18:59:05 server1 glance-api[2828]: cb(*args, **kw) Oct 17 18:59:05 server1 glance-api[2828]: SystemError: error return without exception set Oct 17 19:01:01 server1 systemd-logind[669]: New session 5 of user root. Oct 17 19:01:01 server1 systemd-logind[669]: Removed session 5. Oct 17 19:02:57 server1 glance-registry[2888]: Traceback (most recent call last): Oct 17 19:02:57 server1 glance-registry[2888]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 19:02:57 server1 glance-registry[2888]: timer() Oct 17 19:02:57 server1 glance-registry[2888]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 19:02:57 server1 glance-registry[2888]: cb(*args, **kw) Oct 17 19:02:57 server1 glance-registry[2888]: SystemError: error return without exception set Now try to start nova: $> [root at server1 ~]# service openstack-nova-api start Redirecting to /bin/systemctl start openstack-nova-api.service Oct 17 19:05:10 server1 kernel: [ 8053.797011] nova-api[2919]: segfault at 4 ip 003eefc0 sp bff3c9b8 error 4 in libc-2.14.90.so[2aa000+1a6000] Oct 17 19:05:10 server1 systemd[1]: openstack-nova-api.service: main process exited, code=killed, status=11 Oct 17 19:05:10 server1 systemd[1]: Unit openstack-nova-api.service entered failed state. $> [root at server1 ~]# service openstack-nova-volume status Redirecting to /bin/systemctl status openstack-nova-volume.service openstack-nova-volume.service - OpenStack Nova Volume Server Loaded: loaded (/lib/systemd/system/openstack-nova-volume.service; disabled) Active: active (running) since Mon, 17 Oct 2011 19:08:40 +0300; 15s ago Main PID: 3052 (nova-volume) CGroup: name=systemd:/system/openstack-nova-volume.service ? 3052 /usr/bin/python /usr/bin/nova-volume --flagfile /etc/nova/nova.conf --logfile /var/log/nova/volume.log Only nova-volume starts, the rest services have segfault: Oct 17 19:07:55 server1 kernel: [ 8218.917047] nova-compute[2978]: segfault at bf856000 ip 00255d19 sp bf8538d8 error 6 in libc-2.14.90.so[110000+1a6000] Oct 17 19:07:55 server1 systemd[1]: openstack-nova-compute.service: main process exited, code=killed, status=11 Oct 17 19:07:55 server1 systemd[1]: Unit openstack-nova-compute.service entered failed state. Oct 17 19:08:25 server1 kernel: [ 8248.830936] nova-network[3036]: segfault at bfbaf000 ip 00f3ad3b sp bfbabae8 error 6 in libc-2.14.90.so[df5000+1a6000] Oct 17 19:08:25 server1 systemd[1]: openstack-nova-network.service: main process exited, code=killed, status=11 Oct 17 19:08:25 server1 systemd[1]: Unit openstack-nova-network.service entered failed state. 
Oct 17 19:08:40 server1 nova-volume[3052]: Traceback (most recent call last): Oct 17 19:08:40 server1 nova-volume[3052]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 19:08:40 server1 nova-volume[3052]: timer() Oct 17 19:08:40 server1 nova-volume[3052]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 19:08:40 server1 nova-volume[3052]: cb(*args, **kw) Oct 17 19:08:40 server1 nova-volume[3052]: SystemError: error return without exception set Oct 17 19:10:32 server1 kernel: [ 8375.977931] nova-scheduler[3091]: segfault at bfdf0000 ip 00b3bdcc sp bfdecc50 error 6 in libc-2.14.90.so[9f6000+1a6000] Oct 17 19:10:32 server1 systemd[1]: openstack-nova-scheduler.service: main process exited, code=killed, status=11 Oct 17 19:10:32 server1 systemd[1]: Unit openstack-nova-scheduler.service entered failed state. By the way, the same happens with glibc-2.14.90-12.i686 Thanks, Sergey From Till.Mossakowski at dfki.de Wed Oct 19 16:04:04 2011 From: Till.Mossakowski at dfki.de (Till Mossakowski) Date: Wed, 19 Oct 2011 18:04:04 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long Message-ID: <4E9EF4F4.2080406@dfki.de> Hi, I have set up openstack using stackops. I have installed one controller node and one compute node (each using two 1GBit NICs), following the book "Deploying Openstack". I am using nova-objectstore for storing images. Now starting a machine with a 5G image takes quite a while, probably because the image is mounted via nfs to the compute node. With libvirt, I am used to start VMs instantly. Is there a way to do the same with openstack? The image would need to be stored directly on the compute node, of course. Ideally, in a network with more nodes, the lots of images I have would be distributed to compute nodes in advance, and a special scheduler would select a compute node holding the needed image. Is this possible with openstack? Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 From diego.parrilla at stackops.com Wed Oct 19 16:39:36 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Wed, 19 Oct 2011 18:39:36 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: <4E9EF4F4.2080406@dfki.de> References: <4E9EF4F4.2080406@dfki.de> Message-ID: Hi, my answers below, On Wed, Oct 19, 2011 at 6:04 PM, Till Mossakowski wrote: > Hi, > > I have set up openstack using stackops. > Good choice ;-) > I have installed one controller node and one compute node (each using two > 1GBit NICs), following the book "Deploying Openstack". > I am using nova-objectstore for storing images. > > Now starting a machine with a 5G image takes quite a while, probably > because the image is mounted via nfs to the compute node. > 5GB image it's not too big... we use NFS to share instances among nodes to help with the live migration and performance it's acceptable. How much is 'quite a while' in seconds? > > With libvirt, I am used to start VMs instantly. 
Is there a way to do the > same with openstack? The image would need to be stored directly on the > compute node, of course. Ideally, in a network with more nodes, the lots of > images I have would be distributed to compute nodes in advance, and a > special scheduler would select a compute node holding the needed image. Is > this possible with openstack? > If you share the /var/lib/nova/instances with NFS, during the 'launch' process the base virtual image is copied to '_base'. Depending on the size of this file it will take longer. Once it's copied next time you use this image it should go much faster. Note: I have tested right now with a 1Gb launching a >25GB Windows VM and it took 3-4 minutes the first time. New Windows images, it took only a few seconds. > > Best, > Till > > -- > Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 > DFKI GmbH Bremen Fax +49-421-218-9864226 > Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de > Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till > > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > principal office, *not* the address for mail etc.!!!: > Trippstadter Str. 122, D-67663 Kaiserslautern > management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff > supervisory board: Prof. Hans A. Aukes (chair) > Amtsgericht Kaiserslautern, HRB 2313 > ______________________________**_________________ > Openstack-operators mailing list > Openstack-operators at lists.**openstack.org > http://lists.openstack.org/**cgi-bin/mailman/listinfo/** > openstack-operators > -- Diego Parrilla *CEO* *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | skype:diegoparrilla* * * * ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. 
If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Till.Mossakowski at dfki.de Wed Oct 19 18:12:08 2011 From: Till.Mossakowski at dfki.de (Till Mossakowski) Date: Wed, 19 Oct 2011 20:12:08 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> Message-ID: <4E9F12F8.30207@dfki.de> Hi, > my answers below, many thanks for your quick answer. > I have set up openstack using stackops. > > > Good choice ;-) Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? > 5GB image it's not too big... we use NFS to share instances among nodes > to help with the live migration and performance it's acceptable. How > much is 'quite a while' in seconds? between half a minute and a minute (I haven't taken the exact time...). This is too long for our users. > If you share the /var/lib/nova/instances with NFS, during the 'launch' > process the base virtual image is copied to '_base'. Depending on the > size of this file it will take longer. Once it's copied next time you > use this image it should go much faster. > > Note: I have tested right now with a 1Gb launching a >25GB Windows VM > and it took 3-4 minutes the first time. New Windows images, it took only > a few seconds. This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 From diego.parrilla at stackops.com Thu Oct 20 08:53:40 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Thu, 20 Oct 2011 10:53:40 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: <4E9F12F8.30207@dfki.de> References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi, my answers below. On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski wrote: > Hi, > > my answers below, >> > > many thanks for your quick answer. > > > I have set up openstack using stackops. >> >> >> Good choice ;-) >> > > Yes, the stackops GUI is very nice. However, stackops is based on cactus, > right? Is there a way of using diablo with stackops? Perhaps it is possible > to upgrade the Ubuntu lucid distro that is coming with stackops to natty or > oneiric and then upgrade to diablo using the source > ppa:openstack-release/2011.3 for openstack? Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. 
But things are working much better now. > > > 5GB image it's not too big... we use NFS to share instances among nodes >> to help with the live migration and performance it's acceptable. How >> much is 'quite a while' in seconds? >> > > between half a minute and a minute (I haven't taken the exact time...). > This is too long for our users. If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) > > > If you share the /var/lib/nova/instances with NFS, during the 'launch' >> process the base virtual image is copied to '_base'. Depending on the >> size of this file it will take longer. Once it's copied next time you >> use this image it should go much faster. >> >> Note: I have tested right now with a 1Gb launching a >25GB Windows VM >> and it took 3-4 minutes the first time. New Windows images, it took only >> a few seconds. >> > > This is interesting. Is there a way of telling the scheduler to prefer a > compute node that already has copied the needed image? Try this: 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. 3) Virtual disks are now cached in /var/lib/nova/instances/_base 4) Try to launch now the virtual disks again. They should start very fast. If you need some kind of assistance, please let me know. Regards Diego > > > Best, Till > > -- > Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 > DFKI GmbH Bremen Fax +49-421-218-9864226 > Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de > Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till > > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > principal office, *not* the address for mail etc.!!!: > Trippstadter Str. 122, D-67663 Kaiserslautern > management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff > supervisory board: Prof. Hans A. Aukes (chair) > Amtsgericht Kaiserslautern, HRB 2313 > -- Diego Parrilla *CEO* *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | skype:diegoparrilla* * * * ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. 
***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris-michel.deschenes at ubisoft.com Thu Oct 20 13:47:02 2011 From: boris-michel.deschenes at ubisoft.com (Boris-Michel Deschenes) Date: Thu, 20 Oct 2011 09:47:02 -0400 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi guys, Just a quick note, I had this setup at some point (NFS-mounted /var/lib/nova/instances) which is essential to get live VM migrations if I'm not mistaken (live migration was working perfectly). The problem I had with this setup was that the VM startup time was considerably slower than when the images were residing on a local disk (and I mean, even after all images are "cached"). Basically an image will start the fastest when it is cached locally (local drive) Then, not quite as fast when cached but on a NFS-mounted directory Then really slowly when residing entirely on another disk and needed to be written locally to be cached These are the observations I made but I realize other factors weigh in (SAS vs SATA disk, network speed, etc.) Please advise if you get the same speed in NFS-cached vs local-cached setup as it might convince me to go back to an NFS share (also were you using SAS disks to serve the NFS?). Thanks De : openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] De la part de Diego Parrilla Envoy? : 20 octobre 2011 04:54 ? : Till Mossakowski Cc : openstack-operators at lists.openstack.org Objet : Re: [Openstack-operators] Starting large VMs takes quite long Hi, my answers below. On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski > wrote: Hi, my answers below, many thanks for your quick answer. I have set up openstack using stackops. Good choice ;-) Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. But things are working much better now. 5GB image it's not too big... we use NFS to share instances among nodes to help with the live migration and performance it's acceptable. How much is 'quite a while' in seconds? between half a minute and a minute (I haven't taken the exact time...). This is too long for our users. 
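One way to see how much of the slowdown Boris describes is caching versus NFS read speed is to look at the shared _base directory and time a raw read over the mount against a local copy. A sketch (the path follows the thread; the image file name is a hypothetical placeholder):

ls -lh /var/lib/nova/instances/_base/                      # base images already converted and cached
dd if=/var/lib/nova/instances/_base/<cached image> of=/dev/null bs=1M   # cold read over NFS
dd if=/tmp/<local copy of the same image> of=/dev/null bs=1M            # same read from local disk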
If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) If you share the /var/lib/nova/instances with NFS, during the 'launch' process the base virtual image is copied to '_base'. Depending on the size of this file it will take longer. Once it's copied next time you use this image it should go much faster. Note: I have tested right now with a 1Gb launching a >25GB Windows VM and it took 3-4 minutes the first time. New Windows images, it took only a few seconds. This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? Try this: 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. 3) Virtual disks are now cached in /var/lib/nova/instances/_base 4) Try to launch now the virtual disks again. They should start very fast. If you need some kind of assistance, please let me know. Regards Diego Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 -- Diego Parrilla CEO www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla [cid:~WRD000.jpg] ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. 
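A quick way to confirm that steps 2 and 3 worked is to look at the cache directory on any compute node after the first launch; the base image should be sitting there, and later launches of the same image typically only have to create the small per-instance disks on top of it:

ls -lh /var/lib/nova/instances/_base/

Expect opaque, hash-like file names in _base rather than the original image names.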
This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ~WRD000.jpg Type: image/jpeg Size: 823 bytes Desc: ~WRD000.jpg URL: From diego.parrilla at stackops.com Thu Oct 20 14:03:31 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Thu, 20 Oct 2011 16:03:31 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi, my answers below, On Thu, Oct 20, 2011 at 3:47 PM, Boris-Michel Deschenes < boris-michel.deschenes at ubisoft.com> wrote: > Hi guys,**** > > ** ** > > Just a quick note, I had this setup at some point (NFS-mounted > /var/lib/nova/instances) which is essential to get live VM migrations if I?m > not mistaken (live migration was working perfectly). The problem I had with > this setup was that the VM startup time was considerably slower than when > the images were residing on a local disk (and I mean, even after all images > are ?cached?). > It's true. The fastest disk and the closest to the drive the better. > **** > > ** ** > > Basically an image will start the fastest when it is cached locally (local > drive) > Correct. > **** > > Then, not quite as fast when cached but on a NFS-mounted directory > Correct. It takes some time to create the local disks. It's very important to have a good connection to the shared file system (it's not mandatory to use NFS). > **** > > Then really slowly when residing entirely on another disk and needed to be > written locally to be cached. > Right, it can take several minutes on a 1Gb. > **** > > ** ** > > These are the observations I made but I realize other factors weigh in (SAS > vs SATA disk, network speed, etc.) Please advise if you get the same speed > in NFS-cached vs local-cached setup as it might convince me to go back to an > NFS share (also were you using SAS disks to serve the NFS?). > No, the performance on local disk is much higher than running a NFS on a 1Gb. For my perspective not only live migration is a must for our customers, but also the local virtual disks must persists a catastrophic failure of a nova-compute. That's the reason why recommend 10Gb and a good performant NFS file server connected. 15K or 10K SAS is not so relevant, the bottleneck is the network (speed and latency). There are also good solutions combining 10Gb + SSD Cache disks + 7.2KRPM SAS/SATA disks. I would like to know what the people are using in real life deployments. Any more thoughts? Regards Diego > **** > > ** ** > > Thanks**** > > ** ** > > *De :* openstack-operators-bounces at lists.openstack.org [mailto: > openstack-operators-bounces at lists.openstack.org] *De la part de* Diego > Parrilla > *Envoy? :* 20 octobre 2011 04:54 > *? 
:* Till Mossakowski > *Cc :* openstack-operators at lists.openstack.org > *Objet :* Re: [Openstack-operators] Starting large VMs takes quite long*** > * > > ** ** > > Hi, **** > > ** ** > > my answers below.**** > > ** ** > > On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski < > Till.Mossakowski at dfki.de> wrote:**** > > Hi,**** > > my answers below,**** > > > many thanks for your quick answer.**** > > ** ** > > I have set up openstack using stackops. > > > Good choice ;-)**** > > ** ** > > Yes, the stackops GUI is very nice. However, stackops is based on cactus, > right? Is there a way of using diablo with stackops? Perhaps it is possible > to upgrade the Ubuntu lucid distro that is coming with stackops to natty or > oneiric and then upgrade to diablo using the source > ppa:openstack-release/2011.3 for openstack?**** > > ** ** > > Yes, the 0.3 version with Diablo release is coming. We detected some QA > issues. But things are working much better now.**** > > **** > > ** ** > > 5GB image it's not too big... we use NFS to share instances among nodes > to help with the live migration and performance it's acceptable. How > much is 'quite a while' in seconds?**** > > ** ** > > between half a minute and a minute (I haven't taken the exact time...). > This is too long for our users.**** > > ** ** > > If the virtual disks are cached, launching a 40GB virtual machine takes > less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB > with NFS as shared storage on 1Gb)**** > > **** > > ** ** > > If you share the /var/lib/nova/instances with NFS, during the 'launch' > process the base virtual image is copied to '_base'. Depending on the > size of this file it will take longer. Once it's copied next time you > use this image it should go much faster. > > Note: I have tested right now with a 1Gb launching a >25GB Windows VM > and it took 3-4 minutes the first time. New Windows images, it took only > a few seconds.**** > > ** ** > > This is interesting. Is there a way of telling the scheduler to prefer a > compute node that already has copied the needed image?**** > > ** ** > > Try this:**** > > ** ** > > 1) Configure the compute nodes to use a shared directory with NFS on > /var/lib/nova/instances**** > > 2) Launch ALL the virtual disks you need at runtime. It will take a while > the first time.**** > > 3) Virtual disks are now cached in /var/lib/nova/instances/_base**** > > 4) Try to launch now the virtual disks again. They should start very fast. > **** > > ** ** > > If you need some kind of assistance, please let me know.**** > > ** ** > > Regards**** > > Diego**** > > **** > > > > Best, Till > > -- > Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 > DFKI GmbH Bremen Fax +49-421-218-9864226 > Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de > Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till > > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > principal office, *not* the address for mail etc.!!!: > Trippstadter Str. 122, D-67663 Kaiserslautern > management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff > supervisory board: Prof. Hans A. 
Aukes (chair) > Amtsgericht Kaiserslautern, HRB 2313**** > > ** ** > > > -- **** > > Diego Parrilla > *CEO* > *www.stackops.com | * diego.parrilla at stackops.com | +34 649 94 43 29| skype:diegoparrilla > * > * **** > > *[image: Description : Image supprim?e par l'exp?diteur.]*** > > ******************** ADVERTENCIA LEGAL ******************** > Le informamos, como destinatario de este mensaje, que el correo electr?nico > y las comunicaciones por medio de Internet no permiten asegurar ni > garantizar la confidencialidad de los mensajes transmitidos, as? como > tampoco su integridad o su correcta recepci?n, por lo que STACKOPS > TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. > Si no consintiese en la utilizaci?n del correo electr?nico o de las > comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro > conocimiento de manera inmediata. Este mensaje va dirigido, de manera > exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al > secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso > de haber recibido este mensaje por error, le rogamos que, de forma > inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra > atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento > adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o > utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, > cualquiera que fuera su finalidad, est?n prohibidas por la ley. > > ***************** PRIVILEGED AND CONFIDENTIAL **************** > We hereby inform you, as addressee of this message, that e-mail and > Internet do not guarantee the confidentiality, nor the completeness or > proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. > does not assume any liability for those circumstances. Should you not agree > to the use of e-mail or to communications via Internet, you are kindly > requested to notify us immediately. This message is intended exclusively for > the person to whom it is addressed and contains privileged and confidential > information protected from disclosure by law. If you are not the addressee > indicated in this message, you should immediately delete it and any > attachments and notify the sender by reply e-mail. In such case, you are > hereby notified that any dissemination, distribution, copying or use of this > message or any attachments, for any purpose, is strictly prohibited by law. > **** > > ** ** > > ** ** > -- Diego Parrilla *CEO* *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | skype:diegoparrilla* * * * ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. 
En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.O'Loughlin at surrey.ac.uk Thu Oct 20 15:14:19 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Thu, 20 Oct 2011 16:14:19 +0100 Subject: [Openstack-operators] swift proxy server problem Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Hi All, I'm trying to set up swift and am having an issue with getting the proxy service to start, after a swift-init proxy start the proxy does not start and I see this in the logs: Oct 20 16:12:14 storage05 proxy-server UNCAUGHT EXCEPTION#012Traceback (most recent call last):#012 File "/usr/bin/swift-proxy-server", line 22, in #012 run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)#012 File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 126, in run_wsgi#012 app = loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp#012 return loadobj(APP, uri, name=name, **kw)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in loadobj#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in _loadconfig#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 405, in get_context#012 global_additions=global_additions)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in _pipeline_app_context#012 for name in pipeline[:-1]]#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in get_context#012 section)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in _context_from_use#012 object_type, name=use, global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in get_context#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 
248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in _loadegg#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in get_context#012 object_type, name=name)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587, Any help appreciated. Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From deJongm at TEOCO.com Thu Oct 20 18:00:07 2011 From: deJongm at TEOCO.com (de Jong, Mark-Jan) Date: Thu, 20 Oct 2011 14:00:07 -0400 Subject: [Openstack-operators] nova-network assigned IP address Message-ID: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> Hello, Is there a way to assign an IP address to nova-network other than the default gateway of the network? I want my guests to be directly connected to the "public" network and don't want nova-network to act as my router. I just need it for DHCP. Is this possible? Thanks! ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,.... Mark-Jan de Jong O | 703-259-4406 C | 703-254-6284 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sacampa at gmv.com Fri Oct 21 06:30:32 2011 From: sacampa at gmv.com (Sergio Ariel de la Campa Saiz) Date: Fri, 21 Oct 2011 08:30:32 +0200 Subject: [Openstack-operators] swift proxy server problem In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Message-ID: <947E2550A3F9C740936DDCC9936667B901BE41AEC50A@GMVMAIL4.gmv.es> I suggest you to send the configuration files. Sergio Ariel de la Campa Saiz Ingeniero de Infraestructuras / Infrastucture Engineer / GMV Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid Tel. +34 91 807 21 00 Fax +34 91 807 21 99 www.gmv.com ? ? ? 
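The traceback above dies inside paste.deploy while it is assembling the proxy pipeline, which usually means one of the names in the pipeline = line has no matching [filter:]/[app:] section or points at an egg that is not installed. For comparison with the failing file, a minimal proxy-server.conf of roughly this era looks like the sketch below; it is only an illustration (tempauth is assumed here, and the exact auth middleware and credentials depend on the Swift version and the install guide being followed):

[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

Checking that every name on the pipeline line has a matching section is the first thing worth doing before posting the config files.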
______________________ Este mensaje, y en su caso, cualquier fichero anexo al mismo, puede contener informacion clasificada por su emisor como confidencial en el marco de su Sistema de Gestion de Seguridad de la Informacion siendo para uso exclusivo del destinatario, quedando prohibida su divulgacion copia o distribucion a terceros sin la autorizacion expresa del remitente. Si Vd. ha recibido este mensaje erroneamente, se ruega lo notifique al remitente y proceda a su borrado. Gracias por su colaboracion. ______________________ From ghe.rivero at gmail.com Fri Oct 21 12:17:56 2011 From: ghe.rivero at gmail.com (ghe. rivero) Date: Fri, 21 Oct 2011 14:17:56 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi, Talking about live-migration and shared mount points, has anyone have the chance to try glusterfs connector? They claim to be able to: "Instantly boot VMs using a mountable filesystem interface ? no more fetching the entire VM image before booting" ( http://www.gluster.com/2011/07/27/glusters-shiny-new-connector-for-openstack/ ) See you! Ghe Rivero On Thu, Oct 20, 2011 at 4:03 PM, Diego Parrilla wrote: > Hi, > > my answers below, > > On Thu, Oct 20, 2011 at 3:47 PM, Boris-Michel Deschenes < > boris-michel.deschenes at ubisoft.com> wrote: > >> Hi guys,**** >> >> ** ** >> >> Just a quick note, I had this setup at some point (NFS-mounted >> /var/lib/nova/instances) which is essential to get live VM migrations if I?m >> not mistaken (live migration was working perfectly). The problem I had with >> this setup was that the VM startup time was considerably slower than when >> the images were residing on a local disk (and I mean, even after all images >> are ?cached?). >> > > It's true. The fastest disk and the closest to the drive the better. > > >> **** >> >> ** ** >> >> Basically an image will start the fastest when it is cached locally (local >> drive) >> > > Correct. > > >> **** >> >> Then, not quite as fast when cached but on a NFS-mounted directory >> > > Correct. It takes some time to create the local disks. It's very important > to have a good connection to the shared file system (it's not mandatory to > use NFS). > > >> **** >> >> Then really slowly when residing entirely on another disk and needed to be >> written locally to be cached. >> > > Right, it can take several minutes on a 1Gb. > > >> **** >> >> ** ** >> >> These are the observations I made but I realize other factors weigh in >> (SAS vs SATA disk, network speed, etc.) Please advise if you get the same >> speed in NFS-cached vs local-cached setup as it might convince me to go back >> to an NFS share (also were you using SAS disks to serve the NFS?). >> > > No, the performance on local disk is much higher than running a NFS on a > 1Gb. For my perspective not only live migration is a must for our customers, > but also the local virtual disks must persists a catastrophic failure of a > nova-compute. That's the reason why recommend 10Gb and a good performant NFS > file server connected. 15K or 10K SAS is not so relevant, the bottleneck is > the network (speed and latency). There are also good solutions combining > 10Gb + SSD Cache disks + 7.2KRPM SAS/SATA disks. > > I would like to know what the people are using in real life deployments. > Any more thoughts? 
> > Regards > Diego > > >> **** >> >> ** ** >> >> Thanks**** >> >> ** ** >> >> *De :* openstack-operators-bounces at lists.openstack.org [mailto: >> openstack-operators-bounces at lists.openstack.org] *De la part de* Diego >> Parrilla >> *Envoy? :* 20 octobre 2011 04:54 >> *? :* Till Mossakowski >> *Cc :* openstack-operators at lists.openstack.org >> *Objet :* Re: [Openstack-operators] Starting large VMs takes quite long** >> ** >> >> ** ** >> >> Hi, **** >> >> ** ** >> >> my answers below.**** >> >> ** ** >> >> On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski < >> Till.Mossakowski at dfki.de> wrote:**** >> >> Hi,**** >> >> my answers below,**** >> >> >> many thanks for your quick answer.**** >> >> ** ** >> >> I have set up openstack using stackops. >> >> >> Good choice ;-)**** >> >> ** ** >> >> Yes, the stackops GUI is very nice. However, stackops is based on cactus, >> right? Is there a way of using diablo with stackops? Perhaps it is possible >> to upgrade the Ubuntu lucid distro that is coming with stackops to natty or >> oneiric and then upgrade to diablo using the source >> ppa:openstack-release/2011.3 for openstack?**** >> >> ** ** >> >> Yes, the 0.3 version with Diablo release is coming. We detected some QA >> issues. But things are working much better now.**** >> >> **** >> >> ** ** >> >> 5GB image it's not too big... we use NFS to share instances among nodes >> to help with the live migration and performance it's acceptable. How >> much is 'quite a while' in seconds?**** >> >> ** ** >> >> between half a minute and a minute (I haven't taken the exact time...). >> This is too long for our users.**** >> >> ** ** >> >> If the virtual disks are cached, launching a 40GB virtual machine takes >> less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB >> with NFS as shared storage on 1Gb)**** >> >> **** >> >> ** ** >> >> If you share the /var/lib/nova/instances with NFS, during the 'launch' >> process the base virtual image is copied to '_base'. Depending on the >> size of this file it will take longer. Once it's copied next time you >> use this image it should go much faster. >> >> Note: I have tested right now with a 1Gb launching a >25GB Windows VM >> and it took 3-4 minutes the first time. New Windows images, it took only >> a few seconds.**** >> >> ** ** >> >> This is interesting. Is there a way of telling the scheduler to prefer a >> compute node that already has copied the needed image?**** >> >> ** ** >> >> Try this:**** >> >> ** ** >> >> 1) Configure the compute nodes to use a shared directory with NFS on >> /var/lib/nova/instances**** >> >> 2) Launch ALL the virtual disks you need at runtime. It will take a while >> the first time.**** >> >> 3) Virtual disks are now cached in /var/lib/nova/instances/_base**** >> >> 4) Try to launch now the virtual disks again. They should start very fast. >> **** >> >> ** ** >> >> If you need some kind of assistance, please let me know.**** >> >> ** ** >> >> Regards**** >> >> Diego**** >> >> **** >> >> >> >> Best, Till >> >> -- >> Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 >> DFKI GmbH Bremen Fax +49-421-218-9864226 >> Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de >> Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till >> >> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH >> principal office, *not* the address for mail etc.!!!: >> Trippstadter Str. 122, D-67663 Kaiserslautern >> management board: Prof. Wolfgang Wahlster (chair), Dr. 
Walter Olthoff >> supervisory board: Prof. Hans A. Aukes (chair) >> Amtsgericht Kaiserslautern, HRB 2313**** >> >> ** ** >> >> >> -- **** >> >> Diego Parrilla >> *CEO* >> *www.stackops.com | * diego.parrilla at stackops.com | +34 649 94 43 29| skype:diegoparrilla >> * >> * **** >> >> *[image: Description : Image supprim?e par l'exp?diteur.]*** >> >> ******************** ADVERTENCIA LEGAL ******************** >> Le informamos, como destinatario de este mensaje, que el correo >> electr?nico y las comunicaciones por medio de Internet no permiten asegurar >> ni garantizar la confidencialidad de los mensajes transmitidos, as? como >> tampoco su integridad o su correcta recepci?n, por lo que STACKOPS >> TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. >> Si no consintiese en la utilizaci?n del correo electr?nico o de las >> comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro >> conocimiento de manera inmediata. Este mensaje va dirigido, de manera >> exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al >> secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso >> de haber recibido este mensaje por error, le rogamos que, de forma >> inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra >> atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento >> adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o >> utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, >> cualquiera que fuera su finalidad, est?n prohibidas por la ley. >> >> ***************** PRIVILEGED AND CONFIDENTIAL **************** >> We hereby inform you, as addressee of this message, that e-mail and >> Internet do not guarantee the confidentiality, nor the completeness or >> proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. >> does not assume any liability for those circumstances. Should you not agree >> to the use of e-mail or to communications via Internet, you are kindly >> requested to notify us immediately. This message is intended exclusively for >> the person to whom it is addressed and contains privileged and confidential >> information protected from disclosure by law. If you are not the addressee >> indicated in this message, you should immediately delete it and any >> attachments and notify the sender by reply e-mail. In such case, you are >> hereby notified that any dissemination, distribution, copying or use of this >> message or any attachments, for any purpose, is strictly prohibited by law. >> **** >> >> ** ** >> >> ** ** >> > > > -- > Diego Parrilla > *CEO* > *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | > skype:diegoparrilla* > * > * > > * > > ******************** ADVERTENCIA LEGAL ******************** > Le informamos, como destinatario de este mensaje, que el correo electr?nico > y las comunicaciones por medio de Internet no permiten asegurar ni > garantizar la confidencialidad de los mensajes transmitidos, as? como > tampoco su integridad o su correcta recepci?n, por lo que STACKOPS > TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. > Si no consintiese en la utilizaci?n del correo electr?nico o de las > comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro > conocimiento de manera inmediata. Este mensaje va dirigido, de manera > exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al > secreto profesional, cuya divulgaci?n no est? 
permitida por la ley. En caso > de haber recibido este mensaje por error, le rogamos que, de forma > inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra > atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento > adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o > utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, > cualquiera que fuera su finalidad, est?n prohibidas por la ley. > > ***************** PRIVILEGED AND CONFIDENTIAL **************** > We hereby inform you, as addressee of this message, that e-mail and > Internet do not guarantee the confidentiality, nor the completeness or > proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. > does not assume any liability for those circumstances. Should you not agree > to the use of e-mail or to communications via Internet, you are kindly > requested to notify us immediately. This message is intended exclusively for > the person to whom it is addressed and contains privileged and confidential > information protected from disclosure by law. If you are not the addressee > indicated in this message, you should immediately delete it and any > attachments and notify the sender by reply e-mail. In such case, you are > hereby notified that any dissemination, distribution, copying or use of this > message or any attachments, for any purpose, is strictly prohibited by law. > > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- .''`. Pienso, Luego Incordio : :' : `. `' `- www.debian.org www.hispalinux.es GPG Key: 26F020F7 GPG fingerprint: 4986 39DA D152 050B 4699 9A71 66DB 5A36 26F0 20F7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From borsodp at staff.westminster.ac.uk Fri Oct 21 12:57:10 2011 From: borsodp at staff.westminster.ac.uk (Peter Borsody) Date: Fri, 21 Oct 2011 13:57:10 +0100 Subject: [Openstack-operators] nova-network assigned IP address In-Reply-To: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> References: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> Message-ID: Hi, I had the exactly same problem.So, I patched the nova source code to work, added some option to the point of dnsmasq managing. Cheers, Peter On 20 October 2011 19:00, de Jong, Mark-Jan wrote: > Hello, > > Is there a way to assign an IP address to nova-network other than the > default gateway of the network? I want my guests to be directly connected to > the ?public? network and don?t want nova-network to act as my router. I just > need it for DHCP. Is this possible? > > > > Thanks! > > > > ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,.... > > Mark-Jan de Jong > > O | 703-259-4406 > > C | 703-254-6284 > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wittwerch at gmail.com Fri Oct 21 14:58:47 2011 From: wittwerch at gmail.com (Christian Wittwer) Date: Fri, 21 Oct 2011 16:58:47 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: "The Gluster Connector for OpenStack", it's ridiculous. 
Have a look at the docs, they've done nothing concering Openstack compute. => http://www.gluster.com/wp-content/uploads/2011/07/Gluster-Openstack-VM-storage-v1-shehjar.pdf They just create a normal gluster volume and store the vms on it. That was even possible before, ?I had that setup running long before. Christian > 2011/10/21 ghe. rivero >> >> Hi, >> Talking about live-migration and shared mount points, has anyone have the chance to try glusterfs connector? They claim to be able to: "Instantly boot VMs using a mountable filesystem interface ? no more fetching the entire VM image before booting" (http://www.gluster.com/2011/07/27/glusters-shiny-new-connector-for-openstack/) >> See you! >> ? ? Ghe Rivero >> On Thu, Oct 20, 2011 at 4:03 PM, Diego Parrilla wrote: >>> >>> Hi, >>> my answers below, >>> >>> On Thu, Oct 20, 2011 at 3:47 PM, Boris-Michel Deschenes wrote: >>>> >>>> Hi guys, >>>> >>>> >>>> >>>> Just a quick note, I had this setup at some point (NFS-mounted /var/lib/nova/instances) which is essential to get live VM migrations if I?m not mistaken (live migration was working perfectly).? The problem I had with this setup was that the VM startup time was considerably slower than when the images were residing on a local disk (and I mean, even after all images are ?cached?). >>> >>> It's true. The fastest disk and the closest to the drive the better. >>> >>>> >>>> >>>> >>>> Basically an image will start the fastest when it is cached locally (local drive) >>> >>> Correct. >>> >>>> >>>> Then, not quite as fast when cached but on a NFS-mounted directory >>> >>> Correct. It takes some time to create the local disks. It's very important to have a good connection to the shared file system (it's not mandatory to use NFS). >>> >>>> >>>> Then really slowly when residing entirely on another disk and needed to be written locally to be cached. >>> >>> Right, it can take several minutes on a 1Gb. >>> >>>> >>>> >>>> >>>> These are the observations I made but I realize other factors weigh in (SAS vs SATA disk, network speed, etc.)? Please advise if you get the same speed in NFS-cached vs local-cached setup as it might convince me to go back to an NFS share (also were you using SAS disks to serve the NFS?). >>> >>> No, the performance on local disk is much higher than running a NFS on a 1Gb. For my perspective not only live migration is a must for our customers, but also the local virtual disks must persists a catastrophic failure of a nova-compute. That's the reason why recommend 10Gb and a good performant NFS file server connected. 15K or 10K SAS is not so relevant, the bottleneck is the network (speed and latency). There are also good solutions combining 10Gb + SSD Cache disks + 7.2KRPM SAS/SATA disks. >>> I would like to know what the people are using in real life deployments. Any more thoughts? >>> Regards >>> Diego >>> >>>> >>>> >>>> >>>> Thanks >>>> >>>> >>>> >>>> De?: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] De la part de Diego Parrilla >>>> Envoy??: 20 octobre 2011 04:54 >>>> ??: Till Mossakowski >>>> Cc?: openstack-operators at lists.openstack.org >>>> Objet?: Re: [Openstack-operators] Starting large VMs takes quite long >>>> >>>> >>>> >>>> Hi, >>>> >>>> >>>> >>>> my answers below. >>>> >>>> >>>> >>>> On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski wrote: >>>> >>>> Hi, >>>> >>>> my answers below, >>>> >>>> many thanks for your quick answer. >>>> >>>> >>>> >>>> ? ?I have set up openstack using stackops. 
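Christian's point above is that the "connector" amounts to an ordinary GlusterFS volume mounted where Nova keeps its instances. A rough sketch, with hypothetical server, brick and volume names:

gluster volume create nova-instances replica 2 gluster1:/export/brick1 gluster2:/export/brick1
gluster volume start nova-instances
# on each compute node
mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances

From Nova's point of view this behaves like the NFS setups discussed earlier in the thread: the _base cache is shared, live migration becomes possible, and the same network-throughput caveats apply.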
>>>> >>>> >>>> Good choice ;-) >>>> >>>> >>>> >>>> Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? >>>> >>>> >>>> >>>> Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. But things are working much better now. >>>> >>>> >>>> >>>> >>>> >>>> 5GB image it's not too big... we use NFS to share instances among nodes >>>> to help with the live migration and performance it's acceptable. How >>>> much is 'quite a while' in seconds? >>>> >>>> >>>> >>>> between half a minute and a minute (I haven't taken the exact time...). >>>> This is too long for our users. >>>> >>>> >>>> >>>> If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) >>>> >>>> >>>> >>>> >>>> >>>> If you share the /var/lib/nova/instances with NFS, during the 'launch' >>>> process the base virtual image is copied to '_base'. Depending on the >>>> size of this file it will take longer. Once it's copied next time you >>>> use this image it should go much faster. >>>> >>>> Note: I have tested right now with a 1Gb launching a >25GB Windows VM >>>> and it took 3-4 minutes the first time. New Windows images, it took only >>>> a few seconds. >>>> >>>> >>>> >>>> This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? >>>> >>>> >>>> >>>> Try this: >>>> >>>> >>>> >>>> 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances >>>> >>>> 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. >>>> >>>> 3) Virtual disks are now cached in /var/lib/nova/instances/_base >>>> >>>> 4) Try to launch now the virtual disks again. They should start very fast. >>>> >>>> >>>> >>>> If you need some kind of assistance, please let me know. >>>> >>>> >>>> >>>> Regards >>>> >>>> Diego >>>> >>>> >>>> >>>> Best, Till >>>> >>>> -- >>>> Prof. Dr. Till Mossakowski ?Cartesium, room 2.51 Phone +49-421-218-64226 >>>> DFKI GmbH Bremen ? ? ? ? ? ? ? ? ? ? ? ? ? ? Fax +49-421-218-9864226 >>>> Safe & Secure Cognitive Systems ? ? ? ? ? ? Till.Mossakowski at dfki.de >>>> Enrique-Schmidt-Str. 5, D-28359 Bremen ? http://www.dfki.de/sks/till >>>> >>>> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH >>>> principal office, *not* the address for mail etc.!!!: >>>> Trippstadter Str. 122, D-67663 Kaiserslautern >>>> management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff >>>> supervisory board: Prof. Hans A. Aukes (chair) >>>> Amtsgericht Kaiserslautern, HRB 2313 >>>> >>>> >>>> >>>> -- >>>> >>>> Diego Parrilla >>>> CEO >>>> www.stackops.com?|??diego.parrilla at stackops.com?|?+34 649 94 43 29 |?skype:diegoparrilla >>>> >>>> ******************** ADVERTENCIA LEGAL ******************** >>>> Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. 
Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. >>>> >>>> ***************** PRIVILEGED AND CONFIDENTIAL **************** >>>> We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. >>>> >>>> >>>> >>>> >>> >>> -- >>> Diego Parrilla >>> CEO >>> www.stackops.com?|??diego.parrilla at stackops.com?|?+34 649 94 43 29 |?skype:diegoparrilla >>> >>> ******************** ADVERTENCIA LEGAL ******************** >>> Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. >>> >>> ***************** PRIVILEGED AND CONFIDENTIAL **************** >>> We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. 
Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. >>> >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >> >> >> -- >> ?.''`.? Pienso, Luego Incordio >> : :' : >> `. `' >> ? `-? ? www.debian.org? ? www.hispalinux.es >> >> GPG Key: 26F020F7 >> GPG fingerprint: 4986 39DA D152 050B 4699? 9A71 66DB 5A36 26F0 20F7 >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > From wittwerch at gmail.com Fri Oct 21 15:01:38 2011 From: wittwerch at gmail.com (Christian Wittwer) Date: Fri, 21 Oct 2011 17:01:38 +0200 Subject: [Openstack-operators] nova-network assigned IP address In-Reply-To: References: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> Message-ID: You can overwrite the gateway which dnsmasq should provide via dhcp. Works fine. foo:~# cat /etc/nova/dnsmasq.conf dhcp-option=3,10.2.20.1 Cheers, Christian 2011/10/21 Peter Borsody : > Hi, > > I had the exactly same problem.So, I patched the nova source code to work, > added some option to the point of dnsmasq managing. > Cheers, > Peter > On 20 October 2011 19:00, de Jong, Mark-Jan wrote: >> >> Hello, >> >> Is there a way to assign an IP address to nova-network other than the >> default gateway of the network? I want my guests to be directly connected to >> the ?public? network and don?t want nova-network to act as my router. I just >> need it for DHCP. Is this possible? >> >> >> >> Thanks! >> >> >> >> ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,.... >> >> Mark-Jan de Jong >> >> O | 703-259-4406 >> >> C | 703-254-6284 >> >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From slyphon at gmail.com Mon Oct 24 14:13:51 2011 From: slyphon at gmail.com (Jonathan Simms) Date: Mon, 24 Oct 2011 10:13:51 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: Thanks all for the information! I'm going to use this advice as part of the next round of hardware purchasing we're doing. On Thu, Oct 13, 2011 at 6:11 PM, Gordon Irving wrote: > > > If you are on a Battery Backed Unit raid controller, then its generally safe > to disable barriers for journal filesystems.? 
If your doing soft raid, jbod, > single disk arrays or cheaped out and did not get a BBU then you may want to > enable barriers for filesystem consistency. > > > > For raid cards with a BBU then set your io scheduler to noop, and disable > barriers.? The raid card does its own re-ordering of io operations, the OS > has an incomplete picture of the true drive geometry. ?The raid card is > emulating one disk geometry which could be an array of 2 ? 100+ disks.? The > OS simply can not make good judgment calls on how best to schedule io to > different parts of the disk because its built around the assumption of a > single spinning disk.? This is also true for if a write has made it safely > non persistent cache (ie disk cache), ?to a persistent cache (ie the battery > in your raid card) or persistent storage (that array of disks) .? ???This is > a failure of the Raid card <-> OS interface.? There simply is not the > richness to say (signal write is ok if on platter or persistent cache not > okay in disk cache) or > > > > Enabling barriers effectively turns all writes into Write-Through > operations, so the write goes straight to the disk platter and you get > little performance benefit from the raid card (which hurts a lot in terms of > lost iops). ??If the BBU looses charge/fails ?then the raid controller > downgrades to Write-Through (vs Write-Backed) operation. > > > > BBU ?raid controllers disable disk caches, as these are not safe in event of > power loss, and do not provide any benefit over the raid card cache. > > > > In the context of swift, hdfs and other highly replicated datastores, I run > them in jbod or raid-0 + nobarrier , noatime, nodiratime with a filesystem > aligned to the geometry of underlying storage* ?etc to squeeze as much > performance as possible out of the raw storage.? Let the application layer > deal with redundancy of data across the network, if a machine /disk dies ? > so what, you have N other copies of that data elsewhere on the network.? A > bit of storage is lost ? do consider how many nodes can be down at any time > when operating these sorts of clusters Big boxen with lots of storage may > seem attractive from a density perspective until you loose one and 25% of > your storage capacity with it ? many smaller baskets ? > > > > For network level data consistency ?swift should have a ?data scrubber > (periodic process to read and compare checksums of replicated blocks), I > have not checked if this is implemented or on the roadmap.?? I would be very > surprised if this was not a part of swift. > > > > *you can hint to the fs layer how to offset block writes by specifying a > stride width which is the number of data carrying disks in the array and the > block size typically the default is 64k for raid arrays > > > > From: openstack-operators-bounces at lists.openstack.org > [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Cole > Crawford > Sent: 13 October 2011 13:51 > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] XFS documentation seems to conflict with > recommendations in Swift > > > > generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless?you > have disks that do not have write caches. don't follow that > > recommendation, or for example - fsync won't work which is something swift > relies?upon. > > > > > > On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins > wrote: > > Hi Jonathan, > > > > > > I guess that will depend on how your storage nodes are configured (hardware > wise). 
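Gordon's advice above about alignment and barriers maps onto a handful of mkfs and mount flags. This is a sketch only, with hypothetical geometry (a 64k stripe across 10 data-carrying disks) and placeholder device and mount point; the 1024-byte inodes match the xattr sizing discussed at the start of this thread, and nobarrier should only be used behind a battery-backed controller as Gordon and Marcelo both point out:

mkfs.xfs -i size=1024 -d su=64k,sw=10 /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1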
?The reason why it's recommended is because the storage drives are > actually attached to a controller that has RiW cache enabled. > > > > > > > > Q. Should barriers be enabled with storage which has a persistent write > cache? > > Many hardware RAID have a persistent write cache which preserves it across > power failure, interface resets, system crashes, etc. Using write barriers > in this instance is not recommended and will in fact lower performance. > Therefore, it is recommended to turn off the barrier support and mount the > filesystem with "nobarrier". But take care about the hard disk write cache, > which should be off. > > > > > > Marcelo Martins > > Openstack-swift > > btorch-os at zeroaccess.org > > > > ?Knowledge is the wings on which our aspirations take flight and soar. When > it comes to surfing and life if you know what to do you can do it. If you > desire anything become educated about it and succeed. ? > > > > > > > > On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > ________________________________ > Sophos Limited, The Pentagon, Abingdon Science Park, Abingdon, OX14 3YP, > United Kingdom. > Company Reg No 2096520. VAT Reg No GB 991 2418 08. > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From J.O'Loughlin at surrey.ac.uk Tue Oct 25 21:23:11 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Tue, 25 Oct 2011 22:23:11 +0100 Subject: [Openstack-operators] swift accounts V users Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> Hi All, having problems understanding the concept of a swift account and how it relates to a user. Can anybody provide an explanation? Can an account have multiple users associated with it? 
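One concrete way to picture the account/user relationship: with tempauth-style authentication, several users are declared inside a single account and all of them work against that account's containers. A hypothetical proxy-server.conf fragment (the account "acme" and its users are made up):

[filter:tempauth]
use = egg:swift#tempauth
user_acme_admin = adminsecret .admin
user_acme_alice = alicesecret
user_acme_bob = bobsecret

Here acme is the account (it appears in the storage URL as AUTH_acme), while admin, alice and bob are separate users with their own keys inside that one account.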
Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From andi.abes at gmail.com Tue Oct 25 22:42:29 2011 From: andi.abes at gmail.com (andi abes) Date: Tue, 25 Oct 2011 18:42:29 -0400 Subject: [Openstack-operators] swift accounts V users In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> Message-ID: <8311731865945825881@unknownmsgid> An account maps to a tenant or a customer, and yes it can have many users and many containers. Access control is per user On Oct 25, 2011, at 17:24, "J.O'Loughlin at surrey.ac.uk" wrote: > > Hi All, > > having problems understanding the concept of a swift account and how it relates to a user. Can anybody provide an explanation? > Can an account have multiple users associated with it? > > Regards > > John O'Loughlin > FEPS IT, Service Delivery Team Leader > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From J.O'Loughlin at surrey.ac.uk Wed Oct 26 08:54:52 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 09:54:52 +0100 Subject: [Openstack-operators] glance and swift Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> Hi All, Has anybody managed to configure glance to use swift? I've created a glance account and user on swift and can upload files: >swift list -A https://127.0.0.1:8080/auth/v1.0/ -U glance:glance -K glance glance_bucket virtualization-2edition.pdf Now, I'm truing to update glance config, /etc/glance/glance-api.conf default_store = swift swift_store_auth_address = https://131.227.75.25:8080/auth/ swift_store_user = glance swift_store_key=glance swift_store_container = glance_bucket and restart glance, but when I upload images into nova they are ending up in local filesystem /var/lib/glance/images instead of in swift. Any help appreciated. Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From betodalas at gmail.com Wed Oct 26 09:09:33 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 07:09:33 -0200 Subject: [Openstack-operators] Xen With Openstack Message-ID: Hello, I installed Compute and New Glance a separate server. I'm trying to create VM on Xen by Dashboard. The panel is the pending status logs and shows that XenServer's picking up the image of the Glance, but the machine is not created. Follow the log: [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin R:cdbc860b307a|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed3 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = [ params: (dp0 S'auth_token' p1 NsS'glance_port' p2 I9292 sS'uuid_stack' p3 (lp4 S'a343ec2f-ad1c-4632-b7d9-1add8051c241' p5 aS'4b27c364-6626-4541-896a-65fb0d0b01d3' p6 asS'image_id' p7 S'4' p8 sS'glance_host' p9 S'10.168.1.30' p10 sS'sr_path' p11 S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' p12 s. 
] [20111024T14:18:24.251Z| info|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|dispatcher] spawning a new thread to handle the current task (tr ackid=a043138728544674d13b8d4a8ff673f7) [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed3 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ ] [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe host-list username=root password=null Follow the nova.conf: --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --lock_path=/var/lock/nova --verbose #--libvirt_type=xen --s3_host=10.168.1.32 --rabbit_host=10.168.1.32 --cc_host=10.168.1.32 --ec2_url=http://10.168.1.32:8773/services/Cloud --fixed_range=192.168.1.0/24 --network_size=250 --ec2_api=10.168.1.32 --routing_source_ip=10.168.1.32 --verbose --sql_connection=mysql://root:status64 at 10.168.1.32/nova --network_manager=nova.network.manager.FlatManager --glance_api_servers=10.168.1.32:9292 --image_service=nova.image.glance.GlanceImageService --flat_network_bridge=xenbr0 --connection_type=xenapi --xenapi_connection_url=https://10.168.1.31 --xenapi_connection_username=root --xenapi_connection_password=status64 --reboot_timeout=600 --rescue_timeout=86400 --resize_confirm_window=86400 --allow_resize_to_same_host New log-in information compute.log shows cpu, memory, about Xen Sevres, but does not create machines. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 09:32:36 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 11:32:36 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Hi, did you check what happens on XenServer's dom0? Are there some pending gzip processes? Deploy of vhd images can fail if they're are not properly created. You can find the rigth procedure here: https://answers.launchpad.net/nova/+question/161683 Hope it helps Giuseppe 2011/10/26 Roberto Dalas Z. Benavides : > Hello, I installed Compute and New Glance a separate server. I'm trying to > create VM on Xen by Dashboard. The panel is the pending status logs and > shows that XenServer's picking up the image of the Glance, but the machine > is not created. Follow the log: > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > R:cdbc860b307a|audit] Host.call_plugin host = > '9b3736e1-18ef-4147-8564-a9c64ed3 > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = [ > params: (dp0 > S'auth_token' > p1 > NsS'glance_port' > p2 > I9292 > sS'uuid_stack' > p3 > (lp4 > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > p5 > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > p6 > asS'image_id' > p7 > S'4' > p8 > sS'glance_host' > p9 > S'10.168.1.30' > p10 > sS'sr_path' > p11 > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > p12 > s. 
] > [20111024T14:18:24.251Z| > info|xenserver-opstack|746637|Async.host.call_plugin > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current task > (tr > ackid=a043138728544674d13b8d4a8ff673f7) > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > R:223f6eebc13d|audit] Host.call_plugin host = > '9b3736e1-18ef-4147-8564-a9c64ed3 > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ ] > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe > host-list username=root password=null > > Follow the nova.conf: > > --dhcpbridge_flagfile=/etc/nova/nova.conf > --dhcpbridge=/usr/bin/nova-dhcpbridge > --logdir=/var/log/nova > --state_path=/var/lib/nova > --lock_path=/var/lock/nova > --verbose > > #--libvirt_type=xen > --s3_host=10.168.1.32 > --rabbit_host=10.168.1.32 > --cc_host=10.168.1.32 > --ec2_url=http://10.168.1.32:8773/services/Cloud > --fixed_range=192.168.1.0/24 > --network_size=250 > --ec2_api=10.168.1.32 > --routing_source_ip=10.168.1.32 > --verbose > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > --network_manager=nova.network.manager.FlatManager > --glance_api_servers=10.168.1.32:9292 > --image_service=nova.image.glance.GlanceImageService > --flat_network_bridge=xenbr0 > --connection_type=xenapi > --xenapi_connection_url=https://10.168.1.31 > --xenapi_connection_username=root > --xenapi_connection_password=status64 > --reboot_timeout=600 > --rescue_timeout=86400 > --resize_confirm_window=86400 > --allow_resize_to_same_host > > New log-in information compute.log shows cpu, memory, about Xen Sevres, but > does not create machines. > > Thanks > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From betodalas at gmail.com Wed Oct 26 09:43:56 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 07:43:56 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: A doubt, the new server, compute, must be within a XenServer virtual machine ? The image must actually be as gzip, or you can get on the same Glance as vhd ? 2011/10/26 Giuseppe Civitella > Hi, > > did you check what happens on XenServer's dom0? > Are there some pending gzip processes? > Deploy of vhd images can fail if they're are not properly created. > You can find the rigth procedure here: > https://answers.launchpad.net/nova/+question/161683 > > Hope it helps > Giuseppe > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > Hello, I installed Compute and New Glance a separate server. I'm trying > to > > create VM on Xen by Dashboard. The panel is the pending status logs and > > shows that XenServer's picking up the image of the Glance, but the > machine > > is not created. 
Follow the log: > > > > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > > R:cdbc860b307a|audit] Host.call_plugin host = > > '9b3736e1-18ef-4147-8564-a9c64ed3 > > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = > [ > > params: (dp0 > > S'auth_token' > > p1 > > NsS'glance_port' > > p2 > > I9292 > > sS'uuid_stack' > > p3 > > (lp4 > > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > > p5 > > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > > p6 > > asS'image_id' > > p7 > > S'4' > > p8 > > sS'glance_host' > > p9 > > S'10.168.1.30' > > p10 > > sS'sr_path' > > p11 > > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > > p12 > > s. ] > > [20111024T14:18:24.251Z| > > info|xenserver-opstack|746637|Async.host.call_plugin > > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current > task > > (tr > > ackid=a043138728544674d13b8d4a8ff673f7) > > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > > R:223f6eebc13d|audit] Host.call_plugin host = > > '9b3736e1-18ef-4147-8564-a9c64ed3 > > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ > ] > > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe > > host-list username=root password=null > > > > Follow the nova.conf: > > > > --dhcpbridge_flagfile=/etc/nova/nova.conf > > --dhcpbridge=/usr/bin/nova-dhcpbridge > > --logdir=/var/log/nova > > --state_path=/var/lib/nova > > --lock_path=/var/lock/nova > > --verbose > > > > #--libvirt_type=xen > > --s3_host=10.168.1.32 > > --rabbit_host=10.168.1.32 > > --cc_host=10.168.1.32 > > --ec2_url=http://10.168.1.32:8773/services/Cloud > > --fixed_range=192.168.1.0/24 > > --network_size=250 > > --ec2_api=10.168.1.32 > > --routing_source_ip=10.168.1.32 > > --verbose > > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > > --network_manager=nova.network.manager.FlatManager > > --glance_api_servers=10.168.1.32:9292 > > --image_service=nova.image.glance.GlanceImageService > > --flat_network_bridge=xenbr0 > > --connection_type=xenapi > > --xenapi_connection_url=https://10.168.1.31 > > --xenapi_connection_username=root > > --xenapi_connection_password=status64 > > --reboot_timeout=600 > > --rescue_timeout=86400 > > --resize_confirm_window=86400 > > --allow_resize_to_same_host > > > > New log-in information compute.log shows cpu, memory, about Xen Sevres, > but > > does not create machines. > > > > Thanks > > _______________________________________________ > > Openstack-operators mailing list > > Openstack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 10:00:29 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 12:00:29 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Yes, the nova-compute service has to run on a domU. You need to install XenServer's plugins on dom0 (have a look here: http://wiki.openstack.org/XenServerDevelopment). The domU will tell the dom0 to deploy images via xenapi. You need to extract you vhd image, rename it image.vhd and then gzip it. Glance plugin on XenServer expect vhd images to be gzipped, so if you don't compress them the deploy process will fail. Cheers, Giuseppe 2011/10/26 Roberto Dalas Z. 
Benavides : > A doubt, the new server, compute, must be within a XenServer virtual > machine? > The image must actually be as gzip, or you can get on the same Glance as > vhd? > > 2011/10/26 Giuseppe Civitella >> >> Hi, >> >> did you check what happens on XenServer's dom0? >> Are there some pending gzip processes? >> Deploy of vhd images can fail if they're are not properly created. >> You can find the rigth procedure here: >> https://answers.launchpad.net/nova/+question/161683 >> >> Hope it helps >> Giuseppe >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > Hello, I installed Compute and New Glance a separate server. I'm trying >> > to >> > create VM on Xen by Dashboard. The panel is the pending status logs and >> > shows that XenServer's picking up the image of the Glance, but the >> > machine >> > is not created. Follow the log: >> > >> > >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> > R:cdbc860b307a|audit] Host.call_plugin host = >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args >> > = [ >> > params: (dp0 >> > S'auth_token' >> > p1 >> > NsS'glance_port' >> > p2 >> > I9292 >> > sS'uuid_stack' >> > p3 >> > (lp4 >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> > p5 >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> > p6 >> > asS'image_id' >> > p7 >> > S'4' >> > p8 >> > sS'glance_host' >> > p9 >> > S'10.168.1.30' >> > p10 >> > sS'sr_path' >> > p11 >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> > p12 >> > s. ] >> > [20111024T14:18:24.251Z| >> > info|xenserver-opstack|746637|Async.host.call_plugin >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current >> > task >> > (tr >> > ackid=a043138728544674d13b8d4a8ff673f7) >> > >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> > R:223f6eebc13d|audit] Host.call_plugin host = >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = >> > [ ] >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe >> > host-list username=root password=null >> > >> > Follow the nova.conf: >> > >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> > --logdir=/var/log/nova >> > --state_path=/var/lib/nova >> > --lock_path=/var/lock/nova >> > --verbose >> > >> > #--libvirt_type=xen >> > --s3_host=10.168.1.32 >> > --rabbit_host=10.168.1.32 >> > --cc_host=10.168.1.32 >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> > --fixed_range=192.168.1.0/24 >> > --network_size=250 >> > --ec2_api=10.168.1.32 >> > --routing_source_ip=10.168.1.32 >> > --verbose >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> > --network_manager=nova.network.manager.FlatManager >> > --glance_api_servers=10.168.1.32:9292 >> > --image_service=nova.image.glance.GlanceImageService >> > --flat_network_bridge=xenbr0 >> > --connection_type=xenapi >> > --xenapi_connection_url=https://10.168.1.31 >> > --xenapi_connection_username=root >> > --xenapi_connection_password=status64 >> > --reboot_timeout=600 >> > --rescue_timeout=86400 >> > --resize_confirm_window=86400 >> > --allow_resize_to_same_host >> > >> > New log-in information compute.log shows cpu, memory, about Xen Sevres, >> > but >> > does not create machines. 
>> > >> > Thanks >> > _______________________________________________ >> > Openstack-operators mailing list >> > Openstack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > >> > > > From betodalas at gmail.com Wed Oct 26 10:15:07 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 08:15:07 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: I have an image vmdk and am doing the following: add name = glance lucid_ovf disk_format container_format vhd = = = OVF is_public True > Yes, the nova-compute service has to run on a domU. > You need to install XenServer's plugins on dom0 (have a look here: > http://wiki.openstack.org/XenServerDevelopment). > The domU will tell the dom0 to deploy images via xenapi. > You need to extract you vhd image, rename it image.vhd and then gzip it. > Glance plugin on XenServer expect vhd images to be gzipped, so if you > don't compress them the deploy process will fail. > > Cheers, > Giuseppe > > > > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > A doubt, the new server, compute, must be within a XenServer virtual > > machine? > > The image must actually be as gzip, or you can get on the same Glance as > > vhd? > > > > 2011/10/26 Giuseppe Civitella > >> > >> Hi, > >> > >> did you check what happens on XenServer's dom0? > >> Are there some pending gzip processes? > >> Deploy of vhd images can fail if they're are not properly created. > >> You can find the rigth procedure here: > >> https://answers.launchpad.net/nova/+question/161683 > >> > >> Hope it helps > >> Giuseppe > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > Hello, I installed Compute and New Glance a separate server. I'm > trying > >> > to > >> > create VM on Xen by Dashboard. The panel is the pending status logs > and > >> > shows that XenServer's picking up the image of the Glance, but the > >> > machine > >> > is not created. Follow the log: > >> > > >> > > >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; > args > >> > = [ > >> > params: (dp0 > >> > S'auth_token' > >> > p1 > >> > NsS'glance_port' > >> > p2 > >> > I9292 > >> > sS'uuid_stack' > >> > p3 > >> > (lp4 > >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> > p5 > >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> > p6 > >> > asS'image_id' > >> > p7 > >> > S'4' > >> > p8 > >> > sS'glance_host' > >> > p9 > >> > S'10.168.1.30' > >> > p10 > >> > sS'sr_path' > >> > p11 > >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> > p12 > >> > s. 
] > >> > [20111024T14:18:24.251Z| > >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current > >> > task > >> > (tr > >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> > > >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args > = > >> > [ ] > >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] > xe > >> > host-list username=root password=null > >> > > >> > Follow the nova.conf: > >> > > >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> > --logdir=/var/log/nova > >> > --state_path=/var/lib/nova > >> > --lock_path=/var/lock/nova > >> > --verbose > >> > > >> > #--libvirt_type=xen > >> > --s3_host=10.168.1.32 > >> > --rabbit_host=10.168.1.32 > >> > --cc_host=10.168.1.32 > >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> > --fixed_range=192.168.1.0/24 > >> > --network_size=250 > >> > --ec2_api=10.168.1.32 > >> > --routing_source_ip=10.168.1.32 > >> > --verbose > >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> > --network_manager=nova.network.manager.FlatManager > >> > --glance_api_servers=10.168.1.32:9292 > >> > --image_service=nova.image.glance.GlanceImageService > >> > --flat_network_bridge=xenbr0 > >> > --connection_type=xenapi > >> > --xenapi_connection_url=https://10.168.1.31 > >> > --xenapi_connection_username=root > >> > --xenapi_connection_password=status64 > >> > --reboot_timeout=600 > >> > --rescue_timeout=86400 > >> > --resize_confirm_window=86400 > >> > --allow_resize_to_same_host > >> > > >> > New log-in information compute.log shows cpu, memory, about Xen > Sevres, > >> > but > >> > does not create machines. > >> > > >> > Thanks > >> > _______________________________________________ > >> > Openstack-operators mailing list > >> > Openstack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 10:23:22 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 12:23:22 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: It has to be a vhd image. You can try XenConverter to get a vhd from a vmdk. Cheers, Giuseppe 2011/10/26 Roberto Dalas Z. Benavides : > I have an image vmdk and am doing the following: > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > is_public True > Thanks > > 2011/10/26 Giuseppe Civitella >> >> Yes, the nova-compute service has to run on a domU. >> You need to install XenServer's plugins on dom0 (have a look here: >> http://wiki.openstack.org/XenServerDevelopment). >> The domU will tell the dom0 to deploy images via xenapi. >> You need to extract you vhd image, rename it image.vhd and then gzip it. >> Glance plugin on XenServer expect vhd images to be gzipped, so if you >> don't compress them the deploy process will fail. >> >> Cheers, >> Giuseppe >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > A doubt, the new server, compute, must be within a XenServer virtual >> > machine? >> > The image must actually be as gzip, or you can get on the same Glance as >> > vhd? 
>> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> Hi, >> >> >> >> did you check what happens on XenServer's dom0? >> >> Are there some pending gzip processes? >> >> Deploy of vhd images can fail if they're are not properly created. >> >> You can find the rigth procedure here: >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> Hope it helps >> >> Giuseppe >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > Hello, I installed Compute and New Glance a separate server. I'm >> >> > trying >> >> > to >> >> > create VM on Xen by Dashboard. The panel is the pending status logs >> >> > and >> >> > shows that XenServer's picking up the image of the Glance, but the >> >> > machine >> >> > is not created. Follow the log: >> >> > >> >> > >> >> > >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; >> >> > args >> >> > = [ >> >> > params: (dp0 >> >> > S'auth_token' >> >> > p1 >> >> > NsS'glance_port' >> >> > p2 >> >> > I9292 >> >> > sS'uuid_stack' >> >> > p3 >> >> > (lp4 >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> > p5 >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> > p6 >> >> > asS'image_id' >> >> > p7 >> >> > S'4' >> >> > p8 >> >> > sS'glance_host' >> >> > p9 >> >> > S'10.168.1.30' >> >> > p10 >> >> > sS'sr_path' >> >> > p11 >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> > p12 >> >> > s. ] >> >> > [20111024T14:18:24.251Z| >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> > current >> >> > task >> >> > (tr >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> > >> >> > >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args >> >> > = >> >> > [ ] >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] >> >> > xe >> >> > host-list username=root password=null >> >> > >> >> > Follow the nova.conf: >> >> > >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> > --logdir=/var/log/nova >> >> > --state_path=/var/lib/nova >> >> > --lock_path=/var/lock/nova >> >> > --verbose >> >> > >> >> > #--libvirt_type=xen >> >> > --s3_host=10.168.1.32 >> >> > --rabbit_host=10.168.1.32 >> >> > --cc_host=10.168.1.32 >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> > --fixed_range=192.168.1.0/24 >> >> > --network_size=250 >> >> > --ec2_api=10.168.1.32 >> >> > --routing_source_ip=10.168.1.32 >> >> > --verbose >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> > --network_manager=nova.network.manager.FlatManager >> >> > --glance_api_servers=10.168.1.32:9292 >> >> > --image_service=nova.image.glance.GlanceImageService >> >> > --flat_network_bridge=xenbr0 >> >> > --connection_type=xenapi >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> > --xenapi_connection_username=root >> >> > --xenapi_connection_password=status64 >> >> > --reboot_timeout=600 >> >> > --rescue_timeout=86400 >> >> > --resize_confirm_window=86400 >> >> > --allow_resize_to_same_host >> >> > >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> > Sevres, >> >> > 
but >> >> > does not create machines. >> >> > >> >> > Thanks >> >> > _______________________________________________ >> >> > Openstack-operators mailing list >> >> > Openstack-operators at lists.openstack.org >> >> > >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > >> >> > >> > >> > > > From betodalas at gmail.com Wed Oct 26 11:30:24 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 09:30:24 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Can i use the command ? add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > is_public True < imagem.vhd 2011/10/26 Giuseppe Civitella > It has to be a vhd image. > You can try XenConverter to get a vhd from a vmdk. > > Cheers, > Giuseppe > > 2011/10/26 Roberto Dalas Z. Benavides : > > I have an image vmdk and am doing the following: > > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > > is_public True > > > Thanks > > > > 2011/10/26 Giuseppe Civitella > >> > >> Yes, the nova-compute service has to run on a domU. > >> You need to install XenServer's plugins on dom0 (have a look here: > >> http://wiki.openstack.org/XenServerDevelopment). > >> The domU will tell the dom0 to deploy images via xenapi. > >> You need to extract you vhd image, rename it image.vhd and then gzip it. > >> Glance plugin on XenServer expect vhd images to be gzipped, so if you > >> don't compress them the deploy process will fail. > >> > >> Cheers, > >> Giuseppe > >> > >> > >> > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > A doubt, the new server, compute, must be within a XenServer virtual > >> > machine? > >> > The image must actually be as gzip, or you can get on the same Glance > as > >> > vhd? > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> Hi, > >> >> > >> >> did you check what happens on XenServer's dom0? > >> >> Are there some pending gzip processes? > >> >> Deploy of vhd images can fail if they're are not properly created. > >> >> You can find the rigth procedure here: > >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> > >> >> Hope it helps > >> >> Giuseppe > >> >> > >> >> > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > Hello, I installed Compute and New Glance a separate server. I'm > >> >> > trying > >> >> > to > >> >> > create VM on Xen by Dashboard. The panel is the pending status logs > >> >> > and > >> >> > shows that XenServer's picking up the image of the Glance, but the > >> >> > machine > >> >> > is not created. Follow the log: > >> >> > > >> >> > > >> >> > > >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; > >> >> > args > >> >> > = [ > >> >> > params: (dp0 > >> >> > S'auth_token' > >> >> > p1 > >> >> > NsS'glance_port' > >> >> > p2 > >> >> > I9292 > >> >> > sS'uuid_stack' > >> >> > p3 > >> >> > (lp4 > >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> > p5 > >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> > p6 > >> >> > asS'image_id' > >> >> > p7 > >> >> > S'4' > >> >> > p8 > >> >> > sS'glance_host' > >> >> > p9 > >> >> > S'10.168.1.30' > >> >> > p10 > >> >> > sS'sr_path' > >> >> > p11 > >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> > p12 > >> >> > s. 
] > >> >> > [20111024T14:18:24.251Z| > >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the > >> >> > current > >> >> > task > >> >> > (tr > >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> > > >> >> > > >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; > args > >> >> > = > >> >> > [ ] > >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > unix-RPC||cli] > >> >> > xe > >> >> > host-list username=root password=null > >> >> > > >> >> > Follow the nova.conf: > >> >> > > >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> > --logdir=/var/log/nova > >> >> > --state_path=/var/lib/nova > >> >> > --lock_path=/var/lock/nova > >> >> > --verbose > >> >> > > >> >> > #--libvirt_type=xen > >> >> > --s3_host=10.168.1.32 > >> >> > --rabbit_host=10.168.1.32 > >> >> > --cc_host=10.168.1.32 > >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> > --fixed_range=192.168.1.0/24 > >> >> > --network_size=250 > >> >> > --ec2_api=10.168.1.32 > >> >> > --routing_source_ip=10.168.1.32 > >> >> > --verbose > >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> > --network_manager=nova.network.manager.FlatManager > >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> > --flat_network_bridge=xenbr0 > >> >> > --connection_type=xenapi > >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> > --xenapi_connection_username=root > >> >> > --xenapi_connection_password=status64 > >> >> > --reboot_timeout=600 > >> >> > --rescue_timeout=86400 > >> >> > --resize_confirm_window=86400 > >> >> > --allow_resize_to_same_host > >> >> > > >> >> > New log-in information compute.log shows cpu, memory, about Xen > >> >> > Sevres, > >> >> > but > >> >> > does not create machines. > >> >> > > >> >> > Thanks > >> >> > _______________________________________________ > >> >> > Openstack-operators mailing list > >> >> > Openstack-operators at lists.openstack.org > >> >> > > >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 11:44:38 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 13:44:38 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: If imagem.vhd is a gzipped tar archive containing a file called image.vhd, your command should work this way: glance add name=lucid_ovf disk_format=vhd container_format=ovf is_public=True < imagem.vhd 2011/10/26 Roberto Dalas Z. Benavides : > Can i use the command ? > > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > is_public True < imagem.vhd > > 2011/10/26 Giuseppe Civitella >> >> It has to be a vhd image. >> You can try XenConverter to get a vhd from a vmdk. >> >> Cheers, >> Giuseppe >> >> 2011/10/26 Roberto Dalas Z. 
Benavides : >> > I have an image vmdk and am doing the following: >> > add name = glance lucid_ovf disk_format container_format vhd = = = OVF >> > is_public True > > >> > Thanks >> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> Yes, the nova-compute service has to run on a domU. >> >> You need to install XenServer's plugins on dom0 (have a look here: >> >> http://wiki.openstack.org/XenServerDevelopment). >> >> The domU will tell the dom0 to deploy images via xenapi. >> >> You need to extract you vhd image, rename it image.vhd and then gzip >> >> it. >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if you >> >> don't compress them the deploy process will fail. >> >> >> >> Cheers, >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > A doubt, the new server, compute, must be within a XenServer virtual >> >> > machine? >> >> > The image must actually be as gzip, or you can get on the same Glance >> >> > as >> >> > vhd? >> >> > >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> Hi, >> >> >> >> >> >> did you check what happens on XenServer's dom0? >> >> >> Are there some pending gzip processes? >> >> >> Deploy of vhd images can fail if they're are not properly created. >> >> >> You can find the rigth procedure here: >> >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> >> >> Hope it helps >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> > Hello, I installed Compute and New Glance a separate server. I'm >> >> >> > trying >> >> >> > to >> >> >> > create VM on Xen by Dashboard. The panel is the pending status >> >> >> > logs >> >> >> > and >> >> >> > shows that XenServer's picking up the image of the Glance, but the >> >> >> > machine >> >> >> > is not created. Follow the log: >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; >> >> >> > args >> >> >> > = [ >> >> >> > params: (dp0 >> >> >> > S'auth_token' >> >> >> > p1 >> >> >> > NsS'glance_port' >> >> >> > p2 >> >> >> > I9292 >> >> >> > sS'uuid_stack' >> >> >> > p3 >> >> >> > (lp4 >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> >> > p5 >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> >> > p6 >> >> >> > asS'image_id' >> >> >> > p7 >> >> >> > S'4' >> >> >> > p8 >> >> >> > sS'glance_host' >> >> >> > p9 >> >> >> > S'10.168.1.30' >> >> >> > p10 >> >> >> > sS'sr_path' >> >> >> > p11 >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> >> > p12 >> >> >> > s. 
] >> >> >> > [20111024T14:18:24.251Z| >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> >> > current >> >> >> > task >> >> >> > (tr >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> >> > >> >> >> > >> >> >> > >> >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; >> >> >> > args >> >> >> > = >> >> >> > [ ] >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 >> >> >> > unix-RPC||cli] >> >> >> > xe >> >> >> > host-list username=root password=null >> >> >> > >> >> >> > Follow the nova.conf: >> >> >> > >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> >> > --logdir=/var/log/nova >> >> >> > --state_path=/var/lib/nova >> >> >> > --lock_path=/var/lock/nova >> >> >> > --verbose >> >> >> > >> >> >> > #--libvirt_type=xen >> >> >> > --s3_host=10.168.1.32 >> >> >> > --rabbit_host=10.168.1.32 >> >> >> > --cc_host=10.168.1.32 >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> >> > --fixed_range=192.168.1.0/24 >> >> >> > --network_size=250 >> >> >> > --ec2_api=10.168.1.32 >> >> >> > --routing_source_ip=10.168.1.32 >> >> >> > --verbose >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> >> > --network_manager=nova.network.manager.FlatManager >> >> >> > --glance_api_servers=10.168.1.32:9292 >> >> >> > --image_service=nova.image.glance.GlanceImageService >> >> >> > --flat_network_bridge=xenbr0 >> >> >> > --connection_type=xenapi >> >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> >> > --xenapi_connection_username=root >> >> >> > --xenapi_connection_password=status64 >> >> >> > --reboot_timeout=600 >> >> >> > --rescue_timeout=86400 >> >> >> > --resize_confirm_window=86400 >> >> >> > --allow_resize_to_same_host >> >> >> > >> >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> >> > Sevres, >> >> >> > but >> >> >> > does not create machines. 
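For illustration, the packaging step described earlier in the thread might look like this - a sketch only, with example file names (mymachine.vhd, lucid.tgz) that are not required by the tools:

cp mymachine.vhd image.vhd      # the dom0 glance plugin expects a member literally named image.vhd, as described above
tar -zcf lucid.tgz image.vhd    # gzipped tarball wrapping image.vhd
glance add name=lucid_ovf disk_format=vhd container_format=ovf is_public=True < lucid.tgz

The glance add line follows the syntax Giuseppe gives above; only the two packaging commands in front of it use assumed file names.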
>> >> >> > >> >> >> > Thanks >> >> >> > _______________________________________________ >> >> >> > Openstack-operators mailing list >> >> >> > Openstack-operators at lists.openstack.org >> >> >> > >> >> >> > >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > > > From J.O'Loughlin at surrey.ac.uk Wed Oct 26 11:48:59 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 12:48:59 +0100 Subject: [Openstack-operators] glance and swift In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CAA@EXMB01CMS.surrey.ac.uk> And this is the error message I'm seeing in the logs ERROR [glance.store.swift] Could not find swift_store_auth_address in configuration options Regards John O'Loughlin FEPS IT, Service Delivery Team Leader ________________________________________ From: openstack-operators-bounces at lists.openstack.org [openstack-operators-bounces at lists.openstack.org] On Behalf Of J.O'Loughlin at surrey.ac.uk [J.O'Loughlin at surrey.ac.uk] Sent: 26 October 2011 09:54 To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] glance and swift Hi All, Has anybody managed to configure glance to use swift? I've created a glance account and user on swift and can upload files: >swift list -A https://127.0.0.1:8080/auth/v1.0/ -U glance:glance -K glance glance_bucket virtualization-2edition.pdf Now, I'm truing to update glance config, /etc/glance/glance-api.conf default_store = swift swift_store_auth_address = https://131.227.75.25:8080/auth/ swift_store_user = glance swift_store_key=glance swift_store_container = glance_bucket and restart glance, but when I upload images into nova they are ending up in local filesystem /var/lib/glance/images instead of in swift. Any help appreciated. Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From betodalas at gmail.com Wed Oct 26 12:43:29 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 10:43:29 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: My Openstack Versions is 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) Is correct? Which version is more stable? 2011/10/26 Giuseppe Civitella > If imagem.vhd is a gzipped tar archive containing a file called > image.vhd, your command should work this way: > glance add name=lucid_ovf disk_format=vhd container_format=ovf > is_public=True < imagem.vhd > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > Can i use the command ? > > > > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > > is_public True < imagem.vhd > > > > 2011/10/26 Giuseppe Civitella > >> > >> It has to be a vhd image. > >> You can try XenConverter to get a vhd from a vmdk. > >> > >> Cheers, > >> Giuseppe > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > I have an image vmdk and am doing the following: > >> > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > >> > is_public True >> > > >> > Thanks > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> Yes, the nova-compute service has to run on a domU. 
> >> >> You need to install XenServer's plugins on dom0 (have a look here: > >> >> http://wiki.openstack.org/XenServerDevelopment). > >> >> The domU will tell the dom0 to deploy images via xenapi. > >> >> You need to extract you vhd image, rename it image.vhd and then gzip > >> >> it. > >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if you > >> >> don't compress them the deploy process will fail. > >> >> > >> >> Cheers, > >> >> Giuseppe > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > A doubt, the new server, compute, must be within a XenServer > virtual > >> >> > machine? > >> >> > The image must actually be as gzip, or you can get on the same > Glance > >> >> > as > >> >> > vhd? > >> >> > > >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> > >> >> >> Hi, > >> >> >> > >> >> >> did you check what happens on XenServer's dom0? > >> >> >> Are there some pending gzip processes? > >> >> >> Deploy of vhd images can fail if they're are not properly created. > >> >> >> You can find the rigth procedure here: > >> >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> >> > >> >> >> Hope it helps > >> >> >> Giuseppe > >> >> >> > >> >> >> > >> >> >> > >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> > Hello, I installed Compute and New Glance a separate server. I'm > >> >> >> > trying > >> >> >> > to > >> >> >> > create VM on Xen by Dashboard. The panel is the pending status > >> >> >> > logs > >> >> >> > and > >> >> >> > shows that XenServer's picking up the image of the Glance, but > the > >> >> >> > machine > >> >> >> > is not created. Follow the log: > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = > 'download_vhd'; > >> >> >> > args > >> >> >> > = [ > >> >> >> > params: (dp0 > >> >> >> > S'auth_token' > >> >> >> > p1 > >> >> >> > NsS'glance_port' > >> >> >> > p2 > >> >> >> > I9292 > >> >> >> > sS'uuid_stack' > >> >> >> > p3 > >> >> >> > (lp4 > >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> >> > p5 > >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> >> > p6 > >> >> >> > asS'image_id' > >> >> >> > p7 > >> >> >> > S'4' > >> >> >> > p8 > >> >> >> > sS'glance_host' > >> >> >> > p9 > >> >> >> > S'10.168.1.30' > >> >> >> > p10 > >> >> >> > sS'sr_path' > >> >> >> > p11 > >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> >> > p12 > >> >> >> > s. 
] > >> >> >> > [20111024T14:18:24.251Z| > >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the > >> >> >> > current > >> >> >> > task > >> >> >> > (tr > >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; > >> >> >> > args > >> >> >> > = > >> >> >> > [ ] > >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > >> >> >> > unix-RPC||cli] > >> >> >> > xe > >> >> >> > host-list username=root password=null > >> >> >> > > >> >> >> > Follow the nova.conf: > >> >> >> > > >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> >> > --logdir=/var/log/nova > >> >> >> > --state_path=/var/lib/nova > >> >> >> > --lock_path=/var/lock/nova > >> >> >> > --verbose > >> >> >> > > >> >> >> > #--libvirt_type=xen > >> >> >> > --s3_host=10.168.1.32 > >> >> >> > --rabbit_host=10.168.1.32 > >> >> >> > --cc_host=10.168.1.32 > >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> >> > --fixed_range=192.168.1.0/24 > >> >> >> > --network_size=250 > >> >> >> > --ec2_api=10.168.1.32 > >> >> >> > --routing_source_ip=10.168.1.32 > >> >> >> > --verbose > >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> >> > --network_manager=nova.network.manager.FlatManager > >> >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> >> > --flat_network_bridge=xenbr0 > >> >> >> > --connection_type=xenapi > >> >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> >> > --xenapi_connection_username=root > >> >> >> > --xenapi_connection_password=status64 > >> >> >> > --reboot_timeout=600 > >> >> >> > --rescue_timeout=86400 > >> >> >> > --resize_confirm_window=86400 > >> >> >> > --allow_resize_to_same_host > >> >> >> > > >> >> >> > New log-in information compute.log shows cpu, memory, about Xen > >> >> >> > Sevres, > >> >> >> > but > >> >> >> > does not create machines. > >> >> >> > > >> >> >> > Thanks > >> >> >> > _______________________________________________ > >> >> >> > Openstack-operators mailing list > >> >> >> > Openstack-operators at lists.openstack.org > >> >> >> > > >> >> >> > > >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 12:57:29 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 14:57:29 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: I'm currently using Diablo (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) and it works with XenServer 5.6 (it should with XCP 1.1 too). I did non try yet Essex, sorry. 2011/10/26 Roberto Dalas Z. Benavides : > My Openstack Versions is > > 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) > > Is correct? > > Which version is more stable? 
> > 2011/10/26 Giuseppe Civitella >> >> If ?imagem.vhd is a gzipped tar archive containing a file called >> image.vhd, your command should work this way: >> glance add name=lucid_ovf disk_format=vhd container_format=ovf >> is_public=True < imagem.vhd >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > Can i use the command ? >> > >> > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > >> > is_public True < imagem.vhd >> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> It has to be a vhd image. >> >> You can try XenConverter to get a vhd from a vmdk. >> >> >> >> Cheers, >> >> Giuseppe >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > I have an image vmdk and am doing the following: >> >> > add name = glance lucid_ovf disk_format container_format vhd = = = >> >> > OVF >> >> > is_public True > >> > >> >> > Thanks >> >> > >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> Yes, the nova-compute service has to run on a domU. >> >> >> You need to install XenServer's plugins on dom0 (have a look here: >> >> >> http://wiki.openstack.org/XenServerDevelopment). >> >> >> The domU will tell the dom0 to deploy images via xenapi. >> >> >> You need to extract you vhd image, rename it image.vhd and then gzip >> >> >> it. >> >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if >> >> >> you >> >> >> don't compress them the deploy process will fail. >> >> >> >> >> >> Cheers, >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> > A doubt, the new server, compute, must be within a XenServer >> >> >> > virtual >> >> >> > machine? >> >> >> > The image must actually be as gzip, or you can get on the same >> >> >> > Glance >> >> >> > as >> >> >> > vhd? >> >> >> > >> >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> >> >> Hi, >> >> >> >> >> >> >> >> did you check what happens on XenServer's dom0? >> >> >> >> Are there some pending gzip processes? >> >> >> >> Deploy of vhd images can fail if they're are not properly >> >> >> >> created. >> >> >> >> You can find the rigth procedure here: >> >> >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> >> >> >> >> Hope it helps >> >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> >> > Hello, I installed Compute and New Glance a separate server. >> >> >> >> > I'm >> >> >> >> > trying >> >> >> >> > to >> >> >> >> > create VM on Xen by Dashboard. The panel is the pending status >> >> >> >> > logs >> >> >> >> > and >> >> >> >> > shows that XenServer's picking up the image of the Glance, but >> >> >> >> > the >> >> >> >> > machine >> >> >> >> > is not created. 
Follow the log: >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = >> >> >> >> > 'download_vhd'; >> >> >> >> > args >> >> >> >> > = [ >> >> >> >> > params: (dp0 >> >> >> >> > S'auth_token' >> >> >> >> > p1 >> >> >> >> > NsS'glance_port' >> >> >> >> > p2 >> >> >> >> > I9292 >> >> >> >> > sS'uuid_stack' >> >> >> >> > p3 >> >> >> >> > (lp4 >> >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> >> >> > p5 >> >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> >> >> > p6 >> >> >> >> > asS'image_id' >> >> >> >> > p7 >> >> >> >> > S'4' >> >> >> >> > p8 >> >> >> >> > sS'glance_host' >> >> >> >> > p9 >> >> >> >> > S'10.168.1.30' >> >> >> >> > p10 >> >> >> >> > sS'sr_path' >> >> >> >> > p11 >> >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> >> >> > p12 >> >> >> >> > s. ] >> >> >> >> > [20111024T14:18:24.251Z| >> >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> >> >> > current >> >> >> >> > task >> >> >> >> > (tr >> >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = >> >> >> >> > 'host_data'; >> >> >> >> > args >> >> >> >> > = >> >> >> >> > [ ] >> >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 >> >> >> >> > unix-RPC||cli] >> >> >> >> > xe >> >> >> >> > host-list username=root password=null >> >> >> >> > >> >> >> >> > Follow the nova.conf: >> >> >> >> > >> >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> >> >> > --logdir=/var/log/nova >> >> >> >> > --state_path=/var/lib/nova >> >> >> >> > --lock_path=/var/lock/nova >> >> >> >> > --verbose >> >> >> >> > >> >> >> >> > #--libvirt_type=xen >> >> >> >> > --s3_host=10.168.1.32 >> >> >> >> > --rabbit_host=10.168.1.32 >> >> >> >> > --cc_host=10.168.1.32 >> >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> >> >> > --fixed_range=192.168.1.0/24 >> >> >> >> > --network_size=250 >> >> >> >> > --ec2_api=10.168.1.32 >> >> >> >> > --routing_source_ip=10.168.1.32 >> >> >> >> > --verbose >> >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> >> >> > --network_manager=nova.network.manager.FlatManager >> >> >> >> > --glance_api_servers=10.168.1.32:9292 >> >> >> >> > --image_service=nova.image.glance.GlanceImageService >> >> >> >> > --flat_network_bridge=xenbr0 >> >> >> >> > --connection_type=xenapi >> >> >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> >> >> > --xenapi_connection_username=root >> >> >> >> > --xenapi_connection_password=status64 >> >> >> >> > --reboot_timeout=600 >> >> >> >> > --rescue_timeout=86400 >> >> >> >> > --resize_confirm_window=86400 >> >> >> >> > --allow_resize_to_same_host >> >> >> >> > >> >> >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> >> >> > Sevres, >> >> >> >> > but >> >> >> >> > does not create machines. 
>> >> >> >> > >> >> >> >> > Thanks >> >> >> >> > _______________________________________________ >> >> >> >> > Openstack-operators mailing list >> >> >> >> > Openstack-operators at lists.openstack.org >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> >> > >> >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > > > From J.O'Loughlin at surrey.ac.uk Wed Oct 26 13:16:49 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 14:16:49 +0100 Subject: [Openstack-operators] Roles Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CAB@EXMB01CMS.surrey.ac.uk> Hi All, I'm running trunk on 10.10 I've just created a user and added to a project: nova-manage user create tom nova-project add project2 tom at this stage no roles added: my understanding is that a euca-describe-images should just show images in project? the new user can see all images, all instances in all projects, can start an instance from any image even if marked private, can allocate themselves an address and can then assign that to any other user instances! After the above I gave tom the sysadmin role (global and then in the project). Makes no difference to what they can and cant do. Is this normal behaviour? Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From renato at dualtec.com.br Wed Oct 26 14:25:40 2011 From: renato at dualtec.com.br (Renato Serra Armani) Date: Wed, 26 Oct 2011 14:25:40 +0000 Subject: [Openstack-operators] GIT Version Message-ID: <5669DADFECCDF4468DA742C275FB64FF24B69852@DUALTEC-EXC-1A.dualtec.local> Hi everyone It is my first e-mail. My name is Renato S. Armani and I'm from Brazil. Since august with some folks from python brazilian comunity, and from other companies and initiatives here in Brazil we started to test openstack. Soon as possible we will get more know-how and I'll be glad to contribute highly with the Openstack community. My first question is: Today we are trying to install Openstack over Ubuntu using the scritpt from the Openstack Manual (git clone git://github.com/cloudbuilders/devstack.git...) My question is about the version of this git file. I checked out the version using "sudo nova-compute version list" and the displayed version is the "2012.1" researching on the web I understood that this version is related to the ESSEX release and not releated to the Diablo release that supposed to be 2011.3. I'm a little confused about it because I'm not understanding why is not the DIABLO release instead the ESSEX in the official script? I'll appreciate if anyone can explain this for me? Best Regards, Renato S. Armani -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Wed Oct 26 19:04:11 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:04:11 -0200 Subject: [Openstack-operators] OpenStack With KVM Message-ID: Hello, I installed a server with kvm and would like to know how to have the talk with this kvm OpenStack. What should I put in nova.conf? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:11:39 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:11:39 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: Message-ID: <4EA85B6B.5050201@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:04 PM, Roberto Dalas Z. 
Benavides wrote: > Hello, I installed a server with kvm and would like to know how to have > the talk with this kvm OpenStack. What should I put in nova.conf? > > Thanks > - --libvirt_type=kvm should do the trick. - --sharif - -- Sharif Islam Senior Systems Analyst/Programmer FutureGrid (http://futuregrid.org) Pervasive Technology Institute, Indiana University Bloomington -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqFtrAAoJEACffes9SivFGNcIAKeHf3hBAMcWQinUN9dRvX4d wCG/mJhlpHB90rFPfvV6b2vDZQ2K0M62TAy6PfOoq5K4z+2oHHBQ4vNtUQCdZOM5 NCvXPsQTgEwRg6DCKM/obru8P8hQ4rqlTyF3AAretattzUuNbrCj7hOR1IrlqrlC qL+MN9Zv2BXTApiHyL7KMsJvK1b9MhD8Ww0oMlwKL7GXQzNn4JtDiCIKz0A1Louc HduNjw1aGuWGzWJ4ApOTLX1HBXPvfnJNlF9HLX8XsEF4/36bp4zEZNYmbqcHs80K 6mSSo0aOc31Jy/bicZX75t1dp0qt5sKQi0vTxd6E1mFYUdsMacx9rAOCwHpZxxs= =+wZA -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 19:19:08 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:19:08 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA85B6B.5050201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> Message-ID: But how will you know which server it should connect? and where it asks for a username and password? 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: > > Hello, I installed a server with kvm and would like to know how to have > > the talk with this kvm OpenStack. What should I put in nova.conf? > > > > Thanks > > > > - --libvirt_type=kvm > > should do the trick. > > - --sharif > > > - -- > Sharif Islam > Senior Systems Analyst/Programmer > FutureGrid (http://futuregrid.org) > Pervasive Technology Institute, Indiana University Bloomington > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqFtrAAoJEACffes9SivFGNcIAKeHf3hBAMcWQinUN9dRvX4d > wCG/mJhlpHB90rFPfvV6b2vDZQ2K0M62TAy6PfOoq5K4z+2oHHBQ4vNtUQCdZOM5 > NCvXPsQTgEwRg6DCKM/obru8P8hQ4rqlTyF3AAretattzUuNbrCj7hOR1IrlqrlC > qL+MN9Zv2BXTApiHyL7KMsJvK1b9MhD8Ww0oMlwKL7GXQzNn4JtDiCIKz0A1Louc > HduNjw1aGuWGzWJ4ApOTLX1HBXPvfnJNlF9HLX8XsEF4/36bp4zEZNYmbqcHs80K > 6mSSo0aOc31Jy/bicZX75t1dp0qt5sKQi0vTxd6E1mFYUdsMacx9rAOCwHpZxxs= > =+wZA > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:25:01 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:25:01 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: <4EA85B6B.5050201@indiana.edu> Message-ID: <4EA85E8D.5010205@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:19 PM, Roberto Dalas Z. Benavides wrote: > But how will you know which server it should connect? and where it asks > for a username and password? How many servers do you have? If you have only one server then you will need to install all the nova services there -- nova-compute, nova-network etc. Otherwise, install a controller node with nova-network and nova-scheduler. And rest of the servers will only have nova-compute. 
And each of these compute nodes will have a nova.conf where you will define these flags: - --ec2_url=http://your_nova_controller_server_ip:8773/services/Cloud - --s3_host=your_nova_controller_server_ip - --cc_host=your_nova_controller_server_ip - --rabbit_host=your_nova_controller_server_ip - --network_host=your_nova_controller_server_ip I suggest your read the doc carefully, if you haven't already: http://docs.openstack.org/ And regarding password, usually VMs are booted up using ssh key so it won't need a password. - --sharif > > 2011/10/26 Sharif Islam > > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: >> Hello, I installed a server with kvm and would like to know how to > have >> the talk with this kvm OpenStack. What should I put in nova.conf? > >> Thanks > > > --libvirt_type=kvm > > should do the trick. > > --sharif > > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqF6NAAoJEACffes9SivF3p4H/jn7buudugiOZCx7wKqYop15 qPwwaEHITm7mz879BOxNWOHnMFexfjPn5dNebK+9+4WJTJTB6Wo5YddNxYKbytHa vuU9e3n9p8GHBO3UHdCvUbr9CPKCGUreMQeHpsVia37Y4rul+JD78jtGg1vl+P+N 6yPBrnsW5N2lAbhMMKFKp8tjErDGXa27dg0W5omnyKQ0puimysyyXspX63/HRSbO Bm3H/IQneQNtxK1QyQuGnsv7PYpOPVhWaTSqSe4kFw9+a3OYwUvYrByt6BH87wWI cOBCRJHL40hoXvo34fSm4qzi5Bv/KVJn90p+wPnLFIJYj4JUpQBXq27SpN1fecQ= =v8y1 -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 19:30:45 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:30:45 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA85E8D.5010205@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> Message-ID: I have a server with all the nova services it and on another server I have installed kvm. As the new will know what he kvm server will create the machine in? For example: I use the vmware vmwareapi User data information and password. But in kvm? 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:19 PM, Roberto Dalas Z. Benavides wrote: > > > But how will you know which server it should connect? and where it asks > > for a username and password? > > > How many servers do you have? > > If you have only one server then you will need to install all the nova > services there -- nova-compute, nova-network etc. Otherwise, install a > controller node with nova-network and nova-scheduler. And rest of the > servers will only have nova-compute. > > And each of these compute nodes will have a nova.conf where you will > define these flags: > > - --ec2_url=http://your_nova_controller_server_ip:8773/services/Cloud > - --s3_host=your_nova_controller_server_ip > - --cc_host=your_nova_controller_server_ip > - --rabbit_host=your_nova_controller_server_ip > - --network_host=your_nova_controller_server_ip > > I suggest your read the doc carefully, if you haven't already: > http://docs.openstack.org/ > > And regarding password, usually VMs are booted up using ssh key so it > won't need a password. > > - --sharif > > > > > > > 2011/10/26 Sharif Islam >> > > > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: > >> Hello, I installed a server with kvm and would like to know how to > > have > >> the talk with this kvm OpenStack. What should I put in nova.conf? > > > >> Thanks > > > > > > --libvirt_type=kvm > > > > should do the trick. 
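Putting Sharif's two replies together, a KVM compute-only node ends up with a nova.conf roughly like the sketch below. This is a minimal, assumption-laden example: <controller_ip> stands for the controller's address, and the glance/MySQL lines are taken from the other configs quoted in this archive rather than from Sharif's list.

    --libvirt_type=kvm
    --connection_type=libvirt
    --ec2_url=http://<controller_ip>:8773/services/Cloud
    --s3_host=<controller_ip>
    --cc_host=<controller_ip>
    --rabbit_host=<controller_ip>
    --network_host=<controller_ip>
    --glance_api_servers=<controller_ip>:9292
    --sql_connection=mysql://nova:<password>@<controller_ip>/nova

The controller keeps nova-network, nova-scheduler and the API services; each compute node only runs nova-compute against a file like this.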
> > > > --sharif > > > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqF6NAAoJEACffes9SivF3p4H/jn7buudugiOZCx7wKqYop15 > qPwwaEHITm7mz879BOxNWOHnMFexfjPn5dNebK+9+4WJTJTB6Wo5YddNxYKbytHa > vuU9e3n9p8GHBO3UHdCvUbr9CPKCGUreMQeHpsVia37Y4rul+JD78jtGg1vl+P+N > 6yPBrnsW5N2lAbhMMKFKp8tjErDGXa27dg0W5omnyKQ0puimysyyXspX63/HRSbO > Bm3H/IQneQNtxK1QyQuGnsv7PYpOPVhWaTSqSe4kFw9+a3OYwUvYrByt6BH87wWI > cOBCRJHL40hoXvo34fSm4qzi5Bv/KVJn90p+wPnLFIJYj4JUpQBXq27SpN1fecQ= > =v8y1 > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:43:40 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:43:40 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> Message-ID: <4EA862EC.5070201@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > I have a server with all the novaservices it and on another server I > have installed kvm. > As the new will know what he kvm server will create the machine in? Ok. the server you have kvm, you will need to install nova-compute and in nova.conf file add --libvirt_type=kvm along with the other options. This way nova services will know which server to use. > For example: I use the vmware vmwareapi User data information and > password. But in kvm? I think this will depend how you create your images. You can add a local user in the image or as I mentioned before use a ssh key which will be injected by nova as it boots up. - --sharif -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= =aJjg -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 20:49:31 2011 From: betodalas at gmail.com (Roberto Dalas Z. 
Benavides) Date: Wed, 26 Oct 2011 18:49:31 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA862EC.5070201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> <4EA862EC.5070201@indiana.edu> Message-ID: I installed kvm and nova computer in same machine, but occurred a error: 2011-10-26 18:47:05,936 ERROR nova.exception [-] Uncaught exception (nova.exception): TRACE: Traceback (most recent call last): (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception .py", line 98, in wrapped (nova.exception): TRACE: return f(*args, **kw) (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/connection.py", line 673, in spawn (nova.exception): TRACE: self.firewall_driver.setup_basic_filtering(instance , network_info) (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/firewall.py", line 525, in setup_basic_filtering (nova.exception): TRACE: self.refresh_provider_fw_rules() (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/firewall.py", line 737, in refresh_provider_fw_rules (nova.exception): TRACE: self._do_refresh_provider_fw_rules() (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/utils.py" , line 687, in inner (nova.exception): TRACE: with lock: (nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/lockfile.py", line 223, in __enter__ (nova.exception): TRACE: self.acquire() (nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/lockfile.py", line 239, in acquire (nova.exception): TRACE: raise LockFailed("failed to create %s" % self.uniqu e_name) (nova.exception): TRACE: LockFailed: failed to create /usr/lib/python2.7/dist-pa ckages/OPSTACK-CTR-02.Dummy-9-23053 (nova.exception): TRACE: 2011-10-26 18:47:05,937 ERROR nova.compute.manager [-] Instance '10' failed to s pawn. Is virtualization enabled in the BIOS? Details: failed to create /usr/lib/ python2.7/dist-packages/OPSTACK-CTR-02.Dummy-9-23053 (nova.compute.manager): TRACE: Traceback (most recent call last): (nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/com pute/manager.py", line 424, in _run_instance (nova.compute.manager): TRACE: network_info, block_device_info) (nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exc eption.py", line 129, in wrapped (nova.compute.manager): TRACE: raise Error(str(e)) (nova.compute.manager): TRACE: Error: failed to create /usr/lib/python2.7/dist-p ackages/OPSTACK-CTR-02.Dummy-9-23053 (nova.compute.manager): TRACE: 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > > I have a server with all the novaservices it and on another server I > > have installed kvm. > > As the new will know what he kvm server will create the machine in? > > > Ok. the server you have kvm, you will need to install nova-compute and > in nova.conf file add --libvirt_type=kvm along with the other options. > This way nova services will know which server to use. > > > > For example: I use the vmware vmwareapi User data information and > > password. But in kvm? > > I think this will depend how you create your images. You can add a local > user in the image or as I mentioned before use a ssh key which will be > injected by nova as it boots up. 
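The LockFailed traceback above is nova-compute trying to create its lock file inside /usr/lib/python2.7/dist-packages, which the service user normally cannot write to; the "Is virtualization enabled in the BIOS?" wording is just nova's generic spawn-failure message. A likely fix, sketched here on the assumption that nova runs as the nova user, is to point --lock_path at a directory that user owns (the same path the other nova.conf in this thread uses):

    sudo mkdir -p /var/lock/nova
    sudo chown nova:nova /var/lock/nova
    # then in /etc/nova/nova.conf on the compute node:
    #   --lock_path=/var/lock/nova
    sudo restart nova-compute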
> > - --sharif > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d > Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 > kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP > nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi > dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g > aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= > =aJjg > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlan at bloomenterprises.org Thu Oct 27 01:02:57 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Wed, 26 Oct 2011 20:02:57 -0500 (CDT) Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: <041b297c-eff9-4a35-b13e-26f4669a3764@starx2> Message-ID: Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. 
error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: From sateesh.chodapuneedi at citrix.com Thu Oct 27 02:01:38 2011 From: sateesh.chodapuneedi at citrix.com (Sateesh Chodapuneedi) Date: Thu, 27 Oct 2011 07:31:38 +0530 Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: References: <041b297c-eff9-4a35-b13e-26f4669a3764@starx2> Message-ID: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C053@BANPMAILBOX01.citrite.net> Hi Harlan, You need to install libxml2 libxslt-dev. In Ubuntu, you can try apt-get install libxml2 libxslt-dev. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Harlan H. Bloom Sent: Thursday, October 27, 2011 6:33 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Installing dashboard - can't find Python.h Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. 
error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From harlan at bloomenterprises.org Thu Oct 27 02:18:41 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Wed, 26 Oct 2011 21:18:41 -0500 (CDT) Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C053@BANPMAILBOX01.citrite.net> Message-ID: <52683164-39d6-44a7-b142-49bc4a9b2d53@starx2> Hi Sateesh, Deadsun suggested installing the python-dev package. And that worked. I was able to get the login page, but now I'm trying to figure out how to actually login to the dashboard. Thanks, Harlan... ----- Original Message ----- From: "Sateesh Chodapuneedi" To: "Harlan H. Bloom" , openstack-operators at lists.openstack.org Sent: Wednesday, October 26, 2011 9:01:38 PM Subject: RE: [Openstack-operators] Installing dashboard - can't find Python.h Hi Harlan, You need to install libxml2 libxslt-dev. In Ubuntu, you can try apt-get install libxml2 libxslt-dev. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." 
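As Harlan found, the underlying cause of the Python.h error is that pip is compiling the xattr C extension inside the virtualenv while the Python development headers are missing. On Ubuntu 11.10 something along these lines (python-dev per Harlan's fix, the libxml2/libxslt headers per Sateesh's suggestion; exact package names can vary by release) lets the install finish:

    sudo apt-get install python-dev libxml2-dev libxslt1-dev
    # then re-run the failing step, i.e. the pip install driven by tools/with_venv.sh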
Description: http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Harlan H. Bloom Sent: Thursday, October 27, 2011 6:33 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Installing dashboard - can't find Python.h Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From betodalas at gmail.com Thu Oct 27 08:22:17 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 06:22:17 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Thanks Giuseppe. I will try it. 
2011/10/26 Giuseppe Civitella > I'm currently using Diablo > (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) > and it works with XenServer 5.6 (it should with XCP 1.1 too). > I did non try yet Essex, sorry. > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > My Openstack Versions is > > > > 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) > > > > Is correct? > > > > Which version is more stable? > > > > 2011/10/26 Giuseppe Civitella > >> > >> If imagem.vhd is a gzipped tar archive containing a file called > >> image.vhd, your command should work this way: > >> glance add name=lucid_ovf disk_format=vhd container_format=ovf > >> is_public=True < imagem.vhd > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > Can i use the command ? > >> > > >> > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > > >> > is_public True < imagem.vhd > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> It has to be a vhd image. > >> >> You can try XenConverter to get a vhd from a vmdk. > >> >> > >> >> Cheers, > >> >> Giuseppe > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > I have an image vmdk and am doing the following: > >> >> > add name = glance lucid_ovf disk_format container_format vhd = = = > >> >> > OVF > >> >> > is_public True image? > >> >> > > >> >> > Thanks > >> >> > > >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> > >> >> >> Yes, the nova-compute service has to run on a domU. > >> >> >> You need to install XenServer's plugins on dom0 (have a look here: > >> >> >> http://wiki.openstack.org/XenServerDevelopment). > >> >> >> The domU will tell the dom0 to deploy images via xenapi. > >> >> >> You need to extract you vhd image, rename it image.vhd and then > gzip > >> >> >> it. > >> >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if > >> >> >> you > >> >> >> don't compress them the deploy process will fail. > >> >> >> > >> >> >> Cheers, > >> >> >> Giuseppe > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> > A doubt, the new server, compute, must be within a XenServer > >> >> >> > virtual > >> >> >> > machine? > >> >> >> > The image must actually be as gzip, or you can get on the same > >> >> >> > Glance > >> >> >> > as > >> >> >> > vhd? > >> >> >> > > >> >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> >> > >> >> >> >> Hi, > >> >> >> >> > >> >> >> >> did you check what happens on XenServer's dom0? > >> >> >> >> Are there some pending gzip processes? > >> >> >> >> Deploy of vhd images can fail if they're are not properly > >> >> >> >> created. > >> >> >> >> You can find the rigth procedure here: > >> >> >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> >> >> > >> >> >> >> Hope it helps > >> >> >> >> Giuseppe > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> >> > Hello, I installed Compute and New Glance a separate server. > >> >> >> >> > I'm > >> >> >> >> > trying > >> >> >> >> > to > >> >> >> >> > create VM on Xen by Dashboard. The panel is the pending > status > >> >> >> >> > logs > >> >> >> >> > and > >> >> >> >> > shows that XenServer's picking up the image of the Glance, > but > >> >> >> >> > the > >> >> >> >> > machine > >> >> >> >> > is not created. 
Follow the log: > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = > >> >> >> >> > 'download_vhd'; > >> >> >> >> > args > >> >> >> >> > = [ > >> >> >> >> > params: (dp0 > >> >> >> >> > S'auth_token' > >> >> >> >> > p1 > >> >> >> >> > NsS'glance_port' > >> >> >> >> > p2 > >> >> >> >> > I9292 > >> >> >> >> > sS'uuid_stack' > >> >> >> >> > p3 > >> >> >> >> > (lp4 > >> >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> >> >> > p5 > >> >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> >> >> > p6 > >> >> >> >> > asS'image_id' > >> >> >> >> > p7 > >> >> >> >> > S'4' > >> >> >> >> > p8 > >> >> >> >> > sS'glance_host' > >> >> >> >> > p9 > >> >> >> >> > S'10.168.1.30' > >> >> >> >> > p10 > >> >> >> >> > sS'sr_path' > >> >> >> >> > p11 > >> >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> >> >> > p12 > >> >> >> >> > s. ] > >> >> >> >> > [20111024T14:18:24.251Z| > >> >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle > the > >> >> >> >> > current > >> >> >> >> > task > >> >> >> >> > (tr > >> >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = > >> >> >> >> > 'host_data'; > >> >> >> >> > args > >> >> >> >> > = > >> >> >> >> > [ ] > >> >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > >> >> >> >> > unix-RPC||cli] > >> >> >> >> > xe > >> >> >> >> > host-list username=root password=null > >> >> >> >> > > >> >> >> >> > Follow the nova.conf: > >> >> >> >> > > >> >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> >> >> > --logdir=/var/log/nova > >> >> >> >> > --state_path=/var/lib/nova > >> >> >> >> > --lock_path=/var/lock/nova > >> >> >> >> > --verbose > >> >> >> >> > > >> >> >> >> > #--libvirt_type=xen > >> >> >> >> > --s3_host=10.168.1.32 > >> >> >> >> > --rabbit_host=10.168.1.32 > >> >> >> >> > --cc_host=10.168.1.32 > >> >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> >> >> > --fixed_range=192.168.1.0/24 > >> >> >> >> > --network_size=250 > >> >> >> >> > --ec2_api=10.168.1.32 > >> >> >> >> > --routing_source_ip=10.168.1.32 > >> >> >> >> > --verbose > >> >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> >> >> > --network_manager=nova.network.manager.FlatManager > >> >> >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> >> >> > --flat_network_bridge=xenbr0 > >> >> >> >> > --connection_type=xenapi > >> >> >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> >> >> > --xenapi_connection_username=root > >> >> >> >> > --xenapi_connection_password=status64 > >> >> >> >> > --reboot_timeout=600 > >> >> >> >> > --rescue_timeout=86400 > >> >> >> >> > --resize_confirm_window=86400 > >> >> >> >> > --allow_resize_to_same_host > >> >> >> >> > > >> >> >> 
>> > New log-in information compute.log shows cpu, memory, about > Xen > >> >> >> >> > Sevres, > >> >> >> >> > but > >> >> >> >> > does not create machines. > >> >> >> >> > > >> >> >> >> > Thanks > >> >> >> >> > _______________________________________________ > >> >> >> >> > Openstack-operators mailing list > >> >> >> >> > Openstack-operators at lists.openstack.org > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >> >> > > >> >> >> >> > > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 08:25:05 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 06:25:05 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA862EC.5070201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> <4EA862EC.5070201@indiana.edu> Message-ID: Thanks Sharif. I got 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > > I have a server with all the novaservices it and on another server I > > have installed kvm. > > As the new will know what he kvm server will create the machine in? > > > Ok. the server you have kvm, you will need to install nova-compute and > in nova.conf file add --libvirt_type=kvm along with the other options. > This way nova services will know which server to use. > > > > For example: I use the vmware vmwareapi User data information and > > password. But in kvm? > > I think this will depend how you create your images. You can add a local > user in the image or as I mentioned before use a ssh key which will be > injected by nova as it boots up. > > - --sharif > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d > Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 > kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP > nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi > dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g > aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= > =aJjg > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 08:26:27 2011 From: betodalas at gmail.com (Roberto Dalas Z. 
Benavides) Date: Thu, 27 Oct 2011 06:26:27 -0200 Subject: [Openstack-operators] nova-vnc proxy Message-ID: Hello, I could not start The New vncproxy in the error log shows: new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in (new): TRACE: host = FLAGS.vncproxy_host) (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line 116, in start_tcp (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) (new): TRACE: File "/ usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in listen (new): TRACE: sock.bind (addr) (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth (new): TRACE: return getattr (self._sock, name) (* args) (new): TRACE: error: [Errno 13] Permission denied what can be? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Thu Oct 27 08:55:55 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Thu, 27 Oct 2011 17:55:55 +0900 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Hi, Maybe.. $ sudo start nova-vncproxy <--- this command was run by user 'nova' so, Try run nova-vncproxy by root. # nova-vncproxy & 2011/10/27 Roberto Dalas Z. Benavides > Hello, I could not start The New vncproxy in the error log shows: > > new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in > (new): TRACE: host = FLAGS.vncproxy_host) > (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line > 116, in start_tcp > (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) > (new): TRACE: File "/ > usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in > listen > (new): TRACE: sock.bind (addr) > (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth > (new): TRACE: return getattr (self._sock, name) (* args) > (new): TRACE: error: [Errno 13] Permission denied > what can be? > > Thanks > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 09:39:06 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 07:39:06 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Very good work, but when I click on the dashboard vnc he tries to open the ip127.0.0.1 and not the server ip, you know where you configure it? 2011/10/27 Jae Sang Lee > Hi, > Maybe.. > $ sudo start nova-vncproxy <--- this command was run by user 'nova' > > so, Try run nova-vncproxy by root. > # nova-vncproxy & > > > 2011/10/27 Roberto Dalas Z. Benavides > >> Hello, I could not start The New vncproxy in the error log shows: >> >> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >> (new): TRACE: host = FLAGS.vncproxy_host) >> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line >> 116, in start_tcp >> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >> (new): TRACE: File "/ >> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >> listen >> (new): TRACE: sock.bind (addr) >> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >> (new): TRACE: return getattr (self._sock, name) (* args) >> (new): TRACE: error: [Errno 13] Permission denied >> what can be? 
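Two different problems show up in this thread. The [Errno 13] above is the proxy failing on sock.bind(), which in this exchange was worked around by starting nova-vncproxy as root, as Jae suggests. Separately (and this comes up next in the thread), the address the dashboard redirects browsers to comes from the vncproxy flags in nova.conf, so if it points at 127.0.0.1 the flags need to carry a reachable address. A minimal sketch, with the IP as a placeholder and the flag names matching the nova.conf quoted later in this archive:

    --vnc_enabled=True
    --vncproxy_host=<proxy_node_public_ip>
    --vncproxy_port=6080
    --vncproxy_url=http://<proxy_node_public_ip>:6080

After changing them, nova-api and nova-vncproxy need a restart to pick the new values up.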
>> >> Thanks >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 16:12:01 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 14:12:01 -0200 Subject: [Openstack-operators] Multiple Hypervisors Message-ID: Hello, I wonder if I install a nova-controller and two nova-computers, with each nova-computer connected to a different hypervisor,. Being with one another with KVM and VMware. If I can, as I do that? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 16:59:52 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 14:59:52 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: I did it, but when I click on the "vnc" in the dashboard, he directs it to the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... I wonder where this ip changes. thank you 2011/10/27 Jae Sang Lee > Hi, > Maybe.. > $ sudo start nova-vncproxy <--- this command was run by user 'nova' > > so, Try run nova-vncproxy by root. > # nova-vncproxy & > > > 2011/10/27 Roberto Dalas Z. Benavides > >> Hello, I could not start The New vncproxy in the error log shows: >> >> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >> (new): TRACE: host = FLAGS.vncproxy_host) >> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line >> 116, in start_tcp >> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >> (new): TRACE: File "/ >> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >> listen >> (new): TRACE: sock.bind (addr) >> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >> (new): TRACE: return getattr (self._sock, name) (* args) >> (new): TRACE: error: [Errno 13] Permission denied >> what can be? >> >> Thanks >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlan at bloomenterprises.org Thu Oct 27 17:28:35 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Thu, 27 Oct 2011 12:28:35 -0500 (CDT) Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: Message-ID: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Hello Everyone, I setup OpenStackDashboard according to the instructions on this wiki: http://wiki.openstack.org/OpenStackDashboard However, I can't seem to figure out how to login to the Dashboard. I've tried: User/pass: root/localpassword, admin/admin, admin/999888777666, other local unix usernames and passwords. I do have keystone setup and it appears to be running correctly. I'm running on Ubuntu Server 11.10. All of OpenStack is running on this computer; this is a test system until we get more comfortable with OpenStack before setting up the "real" hardware. I can create VM's from the command line and connect to them just fine. I only need to use the pem files created during OpenStack installation. We would very much prefer to use the website for most of our users. 
If you need any other information, please let me know. Thank you for your time and attention, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdshaonimran at gmail.com Thu Oct 27 17:59:12 2011 From: mdshaonimran at gmail.com (Shaon) Date: Thu, 27 Oct 2011 23:59:12 +0600 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: Login with the username/password you created during the nova installation. On Thu, Oct 27, 2011 at 11:28 PM, Harlan H. Bloom < harlan at bloomenterprises.org> wrote: > Hello Everyone, > I setup OpenStackDashboard according to the instructions on this wiki: > http://wiki.openstack.org/OpenStackDashboard > > However, I can't seem to figure out how to login to the Dashboard. I've > tried: > User/pass: root/localpassword, admin/admin, admin/999888777666, other > local unix usernames and passwords. > > I do have keystone setup and it appears to be running correctly. > > I'm running on Ubuntu Server 11.10. All of OpenStack is running on this > computer; this is a test system until we get more comfortable with OpenStack > before setting up the "real" hardware. > > I can create VM's from the command line and connect to them just fine. I > only need to use the pem files created during OpenStack installation. We > would very much prefer to use the website for most of our users. > > If you need any other information, please let me know. > > Thank you for your time and attention, > > Harlan... > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- thanks -shaon http://mdshaonimran.wordpress.com http://twitter.com/mdshaonimran -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Fri Oct 28 01:27:46 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Fri, 28 Oct 2011 10:27:46 +0900 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: You can change vncproxy host by nova.conf. --vncproxy_host= 2011/10/28 Roberto Dalas Z. Benavides > I did it, but when I click on the "vnc" in the dashboard, he directs it to > the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... > I wonder where this ip changes. > > thank you > > 2011/10/27 Jae Sang Lee > >> Hi, >> Maybe.. >> $ sudo start nova-vncproxy <--- this command was run by user 'nova' >> >> so, Try run nova-vncproxy by root. >> # nova-vncproxy & >> >> >> 2011/10/27 Roberto Dalas Z. Benavides >> >>> Hello, I could not start The New vncproxy in the error log shows: >>> >>> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >>> (new): TRACE: host = FLAGS.vncproxy_host) >>> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", >>> line 116, in start_tcp >>> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >>> (new): TRACE: File "/ >>> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >>> listen >>> (new): TRACE: sock.bind (addr) >>> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >>> (new): TRACE: return getattr (self._sock, name) (* args) >>> (new): TRACE: error: [Errno 13] Permission denied >>> what can be? 
>>> >>> Thanks >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Fri Oct 28 01:32:32 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Fri, 28 Oct 2011 10:32:32 +0900 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: You should login using keystone information. In Keystone DB, there are user information. If you run 'sampledata' when setup keystone, maybe admin user set password 'secrete' try to input 'admin/secrete' 2011/10/28 Shaon > Login with the username/password you created during the nova installation. > > On Thu, Oct 27, 2011 at 11:28 PM, Harlan H. Bloom < > harlan at bloomenterprises.org> wrote: > >> Hello Everyone, >> I setup OpenStackDashboard according to the instructions on this wiki: >> http://wiki.openstack.org/OpenStackDashboard >> >> However, I can't seem to figure out how to login to the Dashboard. I've >> tried: >> User/pass: root/localpassword, admin/admin, admin/999888777666, other >> local unix usernames and passwords. >> >> I do have keystone setup and it appears to be running correctly. >> >> I'm running on Ubuntu Server 11.10. All of OpenStack is running on this >> computer; this is a test system until we get more comfortable with OpenStack >> before setting up the "real" hardware. >> >> I can create VM's from the command line and connect to them just fine. >> I only need to use the pem files created during OpenStack installation. We >> would very much prefer to use the website for most of our users. >> >> If you need any other information, please let me know. >> >> Thank you for your time and attention, >> >> Harlan... >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > > -- > thanks > -shaon > > http://mdshaonimran.wordpress.com > http://twitter.com/mdshaonimran > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:24:42 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:24:42 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Very good!! Thanks 2011/10/27 Jae Sang Lee > You can change vncproxy host by nova.conf. > --vncproxy_host= > > 2011/10/28 Roberto Dalas Z. Benavides > > I did it, but when I click on the "vnc" in the dashboard, he directs it to >> the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... >> I wonder where this ip changes. >> >> thank you >> >> 2011/10/27 Jae Sang Lee >> >>> Hi, >>> Maybe.. >>> $ sudo start nova-vncproxy <--- this command was run by user 'nova' >>> >>> so, Try run nova-vncproxy by root. >>> # nova-vncproxy & >>> >>> >>> 2011/10/27 Roberto Dalas Z. 
Benavides >>> >>>> Hello, I could not start The New vncproxy in the error log shows: >>>> >>>> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >>>> (new): TRACE: host = FLAGS.vncproxy_host) >>>> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", >>>> line 116, in start_tcp >>>> (new): TRACE: eventlet.listen socket = ((host, port), backlog = >>>> backlog) >>>> (new): TRACE: File "/ >>>> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >>>> listen >>>> (new): TRACE: sock.bind (addr) >>>> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >>>> (new): TRACE: return getattr (self._sock, name) (* args) >>>> (new): TRACE: error: [Errno 13] Permission denied >>>> what can be? >>>> >>>> Thanks >>>> >>>> _______________________________________________ >>>> Openstack-operators mailing list >>>> Openstack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:28:34 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:28:34 -0200 Subject: [Openstack-operators] 1 controller and multiple hypervisors Message-ID: Hello, I have two server node and a server controller. Each node points to a KVM. I wonder how the installation of the servers will be made when I click on launch Dashboard. Is it random? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:33:04 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:33:04 -0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: Hi Harlan, use this document: http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ Robert 2011/10/27 Harlan H. Bloom > Hello Everyone, > I setup OpenStackDashboard according to the instructions on this wiki: > http://wiki.openstack.org/OpenStackDashboard > > However, I can't seem to figure out how to login to the Dashboard. I've > tried: > User/pass: root/localpassword, admin/admin, admin/999888777666, other > local unix usernames and passwords. > > I do have keystone setup and it appears to be running correctly. > > I'm running on Ubuntu Server 11.10. All of OpenStack is running on this > computer; this is a test system until we get more comfortable with OpenStack > before setting up the "real" hardware. > > I can create VM's from the command line and connect to them just fine. I > only need to use the pem files created during OpenStack installation. We > would very much prefer to use the website for most of our users. > > If you need any other information, please let me know. > > Thank you for your time and attention, > > Harlan... > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From diego.parrilla at stackops.com Fri Oct 28 08:40:30 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Fri, 28 Oct 2011 10:40:30 +0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: On Fri, Oct 28, 2011 at 10:33 AM, Roberto Dalas Z. Benavides < betodalas at gmail.com> wrote: > Hi Harlan, use this document: > > > http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ > > This is valid for Cactus versions of Nova and pre-keystone integration of the Dashboard. Diablo/stable and Essex branches need keystone for dashboard afaik. Diego > Robert > > 2011/10/27 Harlan H. Bloom > >> Hello Everyone, >> I setup OpenStackDashboard according to the instructions on this wiki: >> http://wiki.openstack.org/OpenStackDashboard >> >> However, I can't seem to figure out how to login to the Dashboard. I've >> tried: >> User/pass: root/localpassword, admin/admin, admin/999888777666, other >> local unix usernames and passwords. >> >> I do have keystone setup and it appears to be running correctly. >> >> I'm running on Ubuntu Server 11.10. All of OpenStack is running on this >> computer; this is a test system until we get more comfortable with OpenStack >> before setting up the "real" hardware. >> >> I can create VM's from the command line and connect to them just fine. >> I only need to use the pem files created during OpenStack installation. We >> would very much prefer to use the website for most of our users. >> >> If you need any other information, please let me know. >> >> Thank you for your time and attention, >> >> Harlan... >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:56:51 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:56:51 -0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: In my works. What is the difference? 2011/10/28 Diego Parrilla > > On Fri, Oct 28, 2011 at 10:33 AM, Roberto Dalas Z. Benavides < > betodalas at gmail.com> wrote: > >> Hi Harlan, use this document: >> >> >> http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ >> >> > > This is valid for Cactus versions of Nova and pre-keystone integration of > the Dashboard. Diablo/stable and Essex branches need keystone for dashboard > afaik. > > Diego > > > >> Robert >> >> 2011/10/27 Harlan H. Bloom >> >>> Hello Everyone, >>> I setup OpenStackDashboard according to the instructions on this wiki: >>> http://wiki.openstack.org/OpenStackDashboard >>> >>> However, I can't seem to figure out how to login to the Dashboard. >>> I've tried: >>> User/pass: root/localpassword, admin/admin, admin/999888777666, >>> other local unix usernames and passwords. >>> >>> I do have keystone setup and it appears to be running correctly. >>> >>> I'm running on Ubuntu Server 11.10. 
All of OpenStack is running on >>> this computer; this is a test system until we get more comfortable with >>> OpenStack before setting up the "real" hardware. >>> >>> I can create VM's from the command line and connect to them just fine. >>> I only need to use the pem files created during OpenStack installation. We >>> would very much prefer to use the website for most of our users. >>> >>> If you need any other information, please let me know. >>> >>> Thank you for your time and attention, >>> >>> Harlan... >>> >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 09:47:58 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 07:47:58 -0200 Subject: [Openstack-operators] Inject ip vmware Message-ID: Hello, I have a new machine with compute-and KVM. With the option - flat_injected = true nova.conf I can inject the ips on vms. In vmware is to do this or only works in KVM? Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 10:41:38 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 08:41:38 -0200 Subject: [Openstack-operators] Multiple nodes, priority Message-ID: Hello, I installed a cloud controller and two nodes with KVM. When I click install on the dashboard, it installs the kvm vms randomly at home. I wonder if it is to set prior to installation. thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From sateesh.chodapuneedi at citrix.com Fri Oct 28 11:57:24 2011 From: sateesh.chodapuneedi at citrix.com (Sateesh Chodapuneedi) Date: Fri, 28 Oct 2011 17:27:24 +0530 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: Message-ID: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Yes, the flag (flat_injected = true) works for nova vmware driver too. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Roberto Dalas Z. Benavides Sent: Friday, October 28, 2011 3:18 PM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Inject ip vmware Hello, I have a new machine with compute-and KVM. With the option - flat_injected = true nova.conf I can inject the ips on vms. In vmware is to do this or only works in KVM? Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From betodalas at gmail.com Fri Oct 28 12:02:57 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 10:02:57 -0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: Not the iamge injecting ip, ip runs out. My nova.conf looks like: --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --verbose --libvirt_type=qemu #--lock_path=/tmp --connection_type=libvirt --s3_host=10.168.1.4 --rabbit_host=10.168.1.4 --cc_host=10.168.1.4 --ec2_url=http://10.168.1.4:8773/services/Cloud --fixed_range=192.168.1.0/24 --network_size=250 --ec2_api=10.168.1.4 --routing_source_ip=10.168.1.4 --verbose --sql_connection=mysql://root:status64 at 10.168.1.4/nova --network_manager=nova.network.manager.FlatManager --glance_api_servers=10.168.1.30:9292 --image_service=nova.image.glance.GlanceImageService --flat_interface=eth0 --flat_injected=true --connection_type=vmwareapi --vmwareapi_host_ip=10.168.1.7:443 --vmwareapi_host_username=root --vmwareapi_host_password=status64 --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl --vncproxy_url=http://10.168.1.31:6080 --vncproxy_host=10.168.1.4 --vncproxy_port=6080 --vnc_console_proxy_url=http://10.168.1.31:6080 --vnc_enabled=True #--ajax_console_proxy_url=http://10.168.1.4:8000 #--ajax_console_proxy_port=8000 need anything else? 2011/10/28 Sateesh Chodapuneedi > Yes, the flag (flat_injected = true) works for nova vmware driver too.**** > > ** ** > > Regards,**** > > Sateesh**** > > ** ** > > > ---------------------------------------------------------------------------------------------------------------------------- > **** > > "This e-mail message is for the sole use of the intended recipient(s) and > may contain confidential and/or privileged information. Any unauthorized > review, use, disclosure, or distribution is prohibited. If you are not the > intended recipient, please contact the sender by reply e-mail and destroy > all copies of the original message." > [image: Description: > http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] > **** > > ** ** > > *From:* openstack-operators-bounces at lists.openstack.org [mailto: > openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto > Dalas Z. Benavides > *Sent:* Friday, October 28, 2011 3:18 PM > *To:* openstack-operators at lists.openstack.org > *Subject:* [Openstack-operators] Inject ip vmware**** > > ** ** > > Hello, I have a new machine with compute-and KVM. With the option - > flat_injected = true nova.conf I can inject the ips on vms. > In vmware is to do this or only works in KVM? > > Thank you very much **** > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From diego.parrilla.santamaria at gmail.com Fri Oct 28 12:06:16 2011 From: diego.parrilla.santamaria at gmail.com (=?ISO-8859-1?Q?Diego_Parrilla_Santamar=EDa?=) Date: Fri, 28 Oct 2011 14:06:16 +0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: It seems --connection_type is twice in the file: On Fri, Oct 28, 2011 at 2:02 PM, Roberto Dalas Z. Benavides < betodalas at gmail.com> wrote: > Not the iamge injecting ip, ip runs out. My nova.conf looks like: > > > --dhcpbridge_flagfile=/etc/nova/nova.conf > --dhcpbridge=/usr/bin/nova-dhcpbridge > --logdir=/var/log/nova > --state_path=/var/lib/nova > --verbose > --libvirt_type=qemu > #--lock_path=/tmp > --connection_type=libvirt > > --s3_host=10.168.1.4 > --rabbit_host=10.168.1.4 > --cc_host=10.168.1.4 > --ec2_url=http://10.168.1.4:8773/services/Cloud > --fixed_range=192.168.1.0/24 > --network_size=250 > --ec2_api=10.168.1.4 > --routing_source_ip=10.168.1.4 > --verbose > --sql_connection=mysql://root:status64 at 10.168.1.4/nova > --network_manager=nova.network.manager.FlatManager > --glance_api_servers=10.168.1.30:9292 > --image_service=nova.image.glance.GlanceImageService > --flat_interface=eth0 > --flat_injected=true > --connection_type=vmwareapi > --vmwareapi_host_ip=10.168.1.7:443 > --vmwareapi_host_username=root > --vmwareapi_host_password=status64 > --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl > > --vncproxy_url=http://10.168.1.31:6080 > --vncproxy_host=10.168.1.4 > --vncproxy_port=6080 > --vnc_console_proxy_url=http://10.168.1.31:6080 > --vnc_enabled=True > > #--ajax_console_proxy_url=http://10.168.1.4:8000 > #--ajax_console_proxy_port=8000 > > need anything else? > > 2011/10/28 Sateesh Chodapuneedi > >> Yes, the flag (flat_injected = true) works for nova vmware driver too.*** >> * >> >> ** ** >> >> Regards,**** >> >> Sateesh**** >> >> ** ** >> >> >> ---------------------------------------------------------------------------------------------------------------------------- >> **** >> >> "This e-mail message is for the sole use of the intended recipient(s) and >> may contain confidential and/or privileged information. Any unauthorized >> review, use, disclosure, or distribution is prohibited. If you are not the >> intended recipient, please contact the sender by reply e-mail and destroy >> all copies of the original message." >> [image: Description: >> http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] >> **** >> >> ** ** >> >> *From:* openstack-operators-bounces at lists.openstack.org [mailto: >> openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto >> Dalas Z. Benavides >> *Sent:* Friday, October 28, 2011 3:18 PM >> *To:* openstack-operators at lists.openstack.org >> *Subject:* [Openstack-operators] Inject ip vmware**** >> >> ** ** >> >> Hello, I have a new machine with compute-and KVM. With the option - >> flat_injected = true nova.conf I can inject the ips on vms. >> In vmware is to do this or only works in KVM? >> >> Thank you very much **** >> > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
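[Following up on the duplicate flag Diego points out above, a trimmed-down sketch of the relevant nova.conf section for a VMware-backed compute node would keep only the vmwareapi connection type together with the flat-network injection flags. The values are taken from the file posted earlier in this thread; treat this as an illustration, not a verified working configuration.]

# keep a single --connection_type; drop the earlier --connection_type=libvirt and --libvirt_type=qemu lines
--connection_type=vmwareapi
--vmwareapi_host_ip=10.168.1.7:443
--vmwareapi_host_username=root
--vmwareapi_host_password=status64
--vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl
# flat networking with injection enabled
--network_manager=nova.network.manager.FlatManager
--flat_interface=eth0
--flat_injected=true

Note that injection also depends on the guest image being able to pick up the injected settings; an image that ignores them will still come up without the address.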
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From betodalas at gmail.com Fri Oct 28 12:22:20 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 10:22:20 -0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: I took this IPCA but not injected into the network card. Eth0 is the only inet6 addr. Thanks 2011/10/28 Diego Parrilla Santamar?a > It seems --connection_type is twice in the file: > > On Fri, Oct 28, 2011 at 2:02 PM, Roberto Dalas Z. Benavides < > betodalas at gmail.com> wrote: > >> Not the iamge injecting ip, ip runs out. My nova.conf looks like: >> >> >> --dhcpbridge_flagfile=/etc/nova/nova.conf >> --dhcpbridge=/usr/bin/nova-dhcpbridge >> --logdir=/var/log/nova >> --state_path=/var/lib/nova >> --verbose >> --libvirt_type=qemu >> #--lock_path=/tmp >> --connection_type=libvirt >> >> --s3_host=10.168.1.4 >> --rabbit_host=10.168.1.4 >> --cc_host=10.168.1.4 >> --ec2_url=http://10.168.1.4:8773/services/Cloud >> --fixed_range=192.168.1.0/24 >> --network_size=250 >> --ec2_api=10.168.1.4 >> --routing_source_ip=10.168.1.4 >> --verbose >> --sql_connection=mysql://root:status64 at 10.168.1.4/nova >> --network_manager=nova.network.manager.FlatManager >> --glance_api_servers=10.168.1.30:9292 >> --image_service=nova.image.glance.GlanceImageService >> --flat_interface=eth0 >> --flat_injected=true >> --connection_type=vmwareapi >> --vmwareapi_host_ip=10.168.1.7:443 >> --vmwareapi_host_username=root >> --vmwareapi_host_password=status64 >> --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl >> >> --vncproxy_url=http://10.168.1.31:6080 >> --vncproxy_host=10.168.1.4 >> --vncproxy_port=6080 >> --vnc_console_proxy_url=http://10.168.1.31:6080 >> --vnc_enabled=True >> >> #--ajax_console_proxy_url=http://10.168.1.4:8000 >> #--ajax_console_proxy_port=8000 >> >> need anything else? >> >> 2011/10/28 Sateesh Chodapuneedi >> >>> Yes, the flag (flat_injected = true) works for nova vmware driver too.** >>> ** >>> >>> ** ** >>> >>> Regards,**** >>> >>> Sateesh**** >>> >>> ** ** >>> >>> >>> ---------------------------------------------------------------------------------------------------------------------------- >>> **** >>> >>> "This e-mail message is for the sole use of the intended recipient(s) and >>> may contain confidential and/or privileged information. Any unauthorized >>> review, use, disclosure, or distribution is prohibited. If you are not the >>> intended recipient, please contact the sender by reply e-mail and destroy >>> all copies of the original message." >>> [image: Description: >>> http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] >>> **** >>> >>> ** ** >>> >>> *From:* openstack-operators-bounces at lists.openstack.org [mailto: >>> openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto >>> Dalas Z. Benavides >>> *Sent:* Friday, October 28, 2011 3:18 PM >>> *To:* openstack-operators at lists.openstack.org >>> *Subject:* [Openstack-operators] Inject ip vmware**** >>> >>> ** ** >>> >>> Hello, I have a new machine with compute-and KVM. With the option - >>> flat_injected = true nova.conf I can inject the ips on vms. >>> In vmware is to do this or only works in KVM? 
>>> Thank you very much **** >>> >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From betodalas at gmail.com Fri Oct 28 13:58:45 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 11:58:45 -0200 Subject: [Openstack-operators] Criteria for distribution of vm Message-ID: Hello, what are the criteria for placing VMs between hypervisors? Availability of resources? -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Mon Oct 31 09:45:59 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Mon, 31 Oct 2011 07:45:59 -0200 Subject: [Openstack-operators] Dashboard + Keystone Message-ID: Hello everyone, I'm trying to access the dashboard using Keystone and I am getting this error: [31/Oct/2011 07:06:07] "POST /auth/login/?next=/dash/ HTTP/1.1" 200 1363 [31/Oct/2011 07:17:41] "GET /auth/login/?next=/dash/ HTTP/1.1" 200 1228 DEBUG: novaclient.client: REQ: http://10.168.1.4:5000/v2.0/tokens curl -i -X POST -H "Content-Type: application/json" -H "User-Agent: python-novaclient" DEBUG: novaclient.client: BODY REQ: {"auth": {"passwordCredentials": { "username": "dualtec", "password": "status64"}}} DEBUG: novaclient.client: RESP: {'status': '400', 'content-length': 24, 'content-type': 'text/plain'} [Errno 111] ECONNREFUSED When I run the command /etc/init.d/keystone start it shows that it started, but then it goes down again. Does anyone know what this can be? thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From andi.abes at gmail.com Thu Oct 20 15:42:46 2011 From: andi.abes at gmail.com (andi abes) Date: Thu, 20 Oct 2011 15:42:46 -0000 Subject: [Openstack-operators] swift proxy server problem In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Message-ID: On a quick look, it seems that your proxy paste config is a bit off. It would be useful to include your proxy config file, as well as version info (cactus / diablo, milestone.. etc) As a side note - rather than trying to install from scratch manually, look for some of the existing deployment scripts out there. There are some using Chef, some using Puppet, and some more comprehensive. a. 
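[For comparison with the traceback quoted below, here is a rough sketch of a minimal Diablo-era proxy-server.conf. Every name listed in "pipeline =" must have a matching [filter:...] or [app:...] section, and a missing or misspelled entry makes paste.deploy fail while resolving the pipeline, much as in John's log. The auth filter shown is a placeholder; substitute whichever middleware your install actually uses (swauth in the older multinode howto, tempauth or keystone later), since the exact egg name depends on your Swift version.]

[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

[filter:tempauth]
# placeholder auth middleware; replace with the filter your deployment uses
use = egg:swift#tempauth

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211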
On Thu, Oct 20, 2011 at 11:14 AM, wrote: > Hi All, > > I'm trying to set up swift and am having an issue with getting the proxy > service to start, after a > swift-init proxy start > > the proxy does not start and I see this in the logs: > > Oct 20 16:12:14 storage05 proxy-server UNCAUGHT EXCEPTION#012Traceback > (most recent call last):#012 File "/usr/bin/swift-proxy-server", line 22, > in #012 run_wsgi(conf_file, 'proxy-server', default_port=8080, > **options)#012 File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", > line 126, in run_wsgi#012 app = loadapp('config:%s' % conf_file, > global_conf={'log_name': log_name})#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in > loadapp#012 return loadobj(APP, uri, name=name, **kw)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in > loadobj#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in > loadcontext#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in > _loadconfig#012 return loader.get_context(object_type, name, > global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/l > oadwsgi.py", line 405, in get_context#012 > global_additions=global_additions)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in > _pipeline_app_context#012 for name in pipeline[:-1]]#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in > get_context#012 section)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in > _context_from_use#012 object_type, name=use, global_conf=global_conf)#012 > File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in > get_context#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in > loadcontext#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in > _loadegg#012 return loader.get_context(object_type, name, > global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in > get_context#012 object_type > , name=name)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587, > > > Any help appreciated. > > Regards > > John O'Loughlin > FEPS IT, Service Delivery Team Leader > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slyphon at gmail.com Wed Oct 12 15:08:01 2011 From: slyphon at gmail.com (Jonathan Simms) Date: Wed, 12 Oct 2011 11:08:01 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift Message-ID: Hello all, I'm in the middle of a 120T Swift deployment, and I've had some concerns about the backing filesystem. I formatted everything with ext4 with 1024b inodes (for storing xattrs), but the process took so long that I'm now looking at XFS again.
In particular, this concerns me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. In the swift documentation, it's recommended to mount the filesystems w/ 'nobarrier', but it would seem to me that this would leave the data open to corruption in the case of a crash. AFAIK, swift doesn't do checksumming (and checksum checking) of stored data (after it is written), which would mean that any data corruption would silently get passed back to the users. Now, I haven't had operational experience running XFS in production, I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations for using XFS safely in production? From btorch-os at zeroaccess.org Thu Oct 13 16:18:21 2011 From: btorch-os at zeroaccess.org (Marcelo Martins) Date: Thu, 13 Oct 2011 11:18:21 -0500 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: Message-ID: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Hi Jonathan, I guess that will depend on how your storage nodes are configured (hardware wise). The reason why it's recommended is because the storage drives are actually attached to a controller that has RiW cache enabled. Q. Should barriers be enabled with storage which has a persistent write cache? Many hardware RAID have a persistent write cache which preserves it across power failure, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off the barrier support and mount the filesystem with "nobarrier". But take care about the hard disk write cache, which should be off. Marcelo Martins Openstack-swift btorch-os at zeroaccess.org ?Knowledge is the wings on which our aspirations take flight and soar. When it comes to surfing and life if you know what to do you can do it. If you desire anything become educated about it and succeed. ? On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
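[For concreteness, the mount options the Swift deployment docs of the time suggested for object disks look roughly like the illustrative fstab line below; the device and mount point are placeholders, and nobarrier only makes sense under the battery-backed-cache conditions Marcelo describes above.]

# example /etc/fstab entry for a Swift object disk (placeholder device name)
/dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,nobarrier,logbufs=8  0 0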
URL: From linuxcole at gmail.com Thu Oct 13 20:50:36 2011 From: linuxcole at gmail.com (Cole Crawford) Date: Thu, 13 Oct 2011 13:50:36 -0700 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless you have disks that do not have write caches. don't follow that recommendation, or for example - fsync won't work which is something swift relies upon. On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins wrote: > Hi Jonathan, > > > I guess that will depend on how your storage nodes are configured (hardware > wise). The reason why it's recommended is because the storage drives are > actually attached to a controller that has RiW cache enabled. > > > > Q. Should barriers be enabled with storage which has a persistent write > cache? > Many hardware RAID have a persistent write cache which preserves it across > power failure, interface resets, system crashes, etc. Using write barriers > in this instance is not recommended and will in fact lower performance. > Therefore, it is recommended to turn off the barrier support and mount the > filesystem with "nobarrier". But take care about the hard disk write cache, > which should be off. > > > Marcelo Martins > Openstack-swift > btorch-os at zeroaccess.org > > ?Knowledge is the wings on which our aspirations take flight and soar. When > it comes to surfing and life if you know what to do you can do it. If you > desire anything become educated about it and succeed. ? > > > > > On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
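[One practical check related to Cole's point: if you do run with nobarrier behind a battery-backed controller, the on-drive write caches are what must be off. Something along these lines verifies and disables the cache; the device name is hypothetical, and drives sitting behind a hardware RAID controller usually have to be toggled through the controller's own CLI instead of hdparm.]

# show whether the drive's volatile write cache is enabled
hdparm -W /dev/sdb
# switch the drive write cache off so only the battery-backed controller cache buffers writes
hdparm -W0 /dev/sdb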
URL: From gordon.irving at sophos.com Thu Oct 13 22:11:59 2011 From: gordon.irving at sophos.com (Gordon Irving) Date: Thu, 13 Oct 2011 18:11:59 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: If you are on a Battery Backed Unit raid controller, then its generally safe to disable barriers for journal filesystems. If your doing soft raid, jbod, single disk arrays or cheaped out and did not get a BBU then you may want to enable barriers for filesystem consistency. For raid cards with a BBU then set your io scheduler to noop, and disable barriers. The raid card does its own re-ordering of io operations, the OS has an incomplete picture of the true drive geometry. The raid card is emulating one disk geometry which could be an array of 2 - 100+ disks. The OS simply can not make good judgment calls on how best to schedule io to different parts of the disk because its built around the assumption of a single spinning disk. This is also true for if a write has made it safely non persistent cache (ie disk cache), to a persistent cache (ie the battery in your raid card) or persistent storage (that array of disks) . This is a failure of the Raid card <-> OS interface. There simply is not the richness to say (signal write is ok if on platter or persistent cache not okay in disk cache) or Enabling barriers effectively turns all writes into Write-Through operations, so the write goes straight to the disk platter and you get little performance benefit from the raid card (which hurts a lot in terms of lost iops). If the BBU looses charge/fails then the raid controller downgrades to Write-Through (vs Write-Backed) operation. BBU raid controllers disable disk caches, as these are not safe in event of power loss, and do not provide any benefit over the raid card cache. In the context of swift, hdfs and other highly replicated datastores, I run them in jbod or raid-0 + nobarrier , noatime, nodiratime with a filesystem aligned to the geometry of underlying storage* etc to squeeze as much performance as possible out of the raw storage. Let the application layer deal with redundancy of data across the network, if a machine /disk dies ... so what, you have N other copies of that data elsewhere on the network. A bit of storage is lost ... do consider how many nodes can be down at any time when operating these sorts of clusters Big boxen with lots of storage may seem attractive from a density perspective until you loose one and 25% of your storage capacity with it ... many smaller baskets ... For network level data consistency swift should have a data scrubber (periodic process to read and compare checksums of replicated blocks), I have not checked if this is implemented or on the roadmap. I would be very surprised if this was not a part of swift. 
*you can hint to the fs layer how to offset block writes by specifying a stride width which is the number of data carrying disks in the array and the block size typically the default is 64k for raid arrays From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Cole Crawford Sent: 13 October 2011 13:51 To: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless you have disks that do not have write caches. don't follow that recommendation, or for example - fsync won't work which is something swift relies upon. On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins > wrote: Hi Jonathan, I guess that will depend on how your storage nodes are configured (hardware wise). The reason why it's recommended is because the storage drives are actually attached to a controller that has RiW cache enabled. Q. Should barriers be enabled with storage which has a persistent write cache? Many hardware RAID have a persistent write cache which preserves it across power failure, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off the barrier support and mount the filesystem with "nobarrier". But take care about the hard disk write cache, which should be off. Marcelo Martins Openstack-swift btorch-os at zeroaccess.org "Knowledge is the wings on which our aspirations take flight and soar. When it comes to surfing and life if you know what to do you can do it. If you desire anything become educated about it and succeed. " On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: Hello all, I'm in the middle of a 120T Swift deployment, and I've had some concerns about the backing filesystem. I formatted everything with ext4 with 1024b inodes (for storing xattrs), but the process took so long that I'm now looking at XFS again. In particular, this concerns me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. In the swift documentation, it's recommended to mount the filesystems w/ 'nobarrier', but it would seem to me that this would leave the data open to corruption in the case of a crash. AFAIK, swift doesn't do checksumming (and checksum checking) of stored data (after it is written), which would mean that any data corruption would silently get passed back to the users. Now, I haven't had operational experience running XFS in production, I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations for using XFS safely in production? _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ________________________________ Sophos Limited, The Pentagon, Abingdon Science Park, Abingdon, OX14 3YP, United Kingdom. Company Reg No 2096520. VAT Reg No GB 991 2418 08. -------------- next part -------------- An HTML attachment was scrubbed... 
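[To make Gordon's footnote concrete, here is a worked example under assumed geometry: 10 data-carrying disks and a 64k stripe (chunk) size, so substitute your controller's real values. The noop scheduler setting he mentions for BBU-backed arrays is included as well.]

# XFS: stripe unit (su) = chunk size, stripe width (sw) = number of data disks
# (-i size=1024 matches the large-inode recommendation for Swift xattrs)
mkfs.xfs -i size=1024 -d su=64k,sw=10 /dev/sdb1

# ext4 equivalent: stride = 64k chunk / 4k block = 16, stripe-width = 16 * 10 data disks = 160
mkfs.ext4 -E stride=16,stripe-width=160 /dev/sdb1

# with a battery-backed RAID cache, let the controller do the I/O reordering
echo noop > /sys/block/sdb/queue/scheduler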
URL: From J.O'Loughlin at surrey.ac.uk Fri Oct 14 14:13:24 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Fri, 14 Oct 2011 15:13:24 +0100 Subject: [Openstack-operators] configuring the scheduler Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C7A@EXMB01CMS.surrey.ac.uk> Hi, I'm running Diablo and looking for advice on configuring the scheduler. I have compute nodes of differing capabilities and would like to be able to describe that to the scheduler so it can make decisions based on that information. Does anybody know how to do something like this? Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From J.O'Loughlin at surrey.ac.uk Fri Oct 14 14:16:35 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Fri, 14 Oct 2011 15:16:35 +0100 Subject: [Openstack-operators] second availability zone Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C7B@EXMB01CMS.surrey.ac.uk> Hi, is it possible to set up a second availability zone on diablo? If anybody knows how to do this would be very interested in hearing from them. Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From sergey at kulanov.org.ua Fri Oct 14 18:57:53 2011 From: sergey at kulanov.org.ua (Sergey Kulanov) Date: Fri, 14 Oct 2011 21:57:53 +0300 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) Message-ID: <4E988631.10905@kulanov.org.ua> Hi, I have the following troubles while using Openstack on Fedora 16: SOFTWARE (what do we have): - Fedora 16 Beta - Linux server1.example.com 3.1.0-0.rc9.git0.0.fc16.i686.PAE #1 SMP Wed Oct 5 15:51:55 UTC 2011 i686 i686 i386 GNU/Linux - openstack-swift-auth-1.4.0-2.fc16.noarch openstack-glance-2011.3-1.fc16.noarch openstack-swift-1.4.0-2.fc16.noarch openstack-swift-proxy-1.4.0-2.fc16.noarch openstack-swift-account-1.4.0-2.fc16.noarch openstack-nova-2011.3-3.fc16.noarch openstack-swift-object-1.4.0-2.fc16.noarch openstack-swift-container-1.4.0-2.fc16.noarch - glibc-2.14.90-11 -python-2.7.2-4.fc16.i686 I tried to follow this instruction http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova INSTALLATION: 1) Some installation warnings: Downloading Packages: (1/5): openstack-swift-account-1.4.0-2.fc16.noarch.rpm | 26 kB 00:00 (2/5): openstack-swift-auth-1.4.0-2.fc16.noarch.rpm | 9.9 kB 00:00 (3/5): openstack-swift-container-1.4.0-2.fc16.noarch.rpm | 26 kB 00:00 (4/5): openstack-swift-object-1.4.0-2.fc16.noarch.rpm | 44 kB 00:00 (5/5): openstack-swift-proxy-1.4.0-2.fc16.noarch.rpm | 37 kB 00:00 ---------------------------------------------------------------------------------------------------------------------------------------- Total 137 kB/s | 143 kB 00:01 Running Transaction Check Running Transaction Test Transaction Test Succeeded Running Transaction Installing : openstack-swift-object-1.4.0-2.fc16.noarch 1/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-object-1.4.0-2.fc16.noarch error reading information on service swift-object: No such file or directory warning: %post(openstack-swift-object-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-proxy-1.4.0-2.fc16.noarch 2/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-proxy-1.4.0-2.fc16.noarch error reading information on service swift-proxy: No such file or directory warning: %post(openstack-swift-proxy-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-auth-1.4.0-2.fc16.noarch 3/5 Non-fatal POSTIN scriptlet failure 
in rpm package openstack-swift-auth-1.4.0-2.fc16.noarch error reading information on service swift-auth: No such file or directory warning: %post(openstack-swift-auth-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-account-1.4.0-2.fc16.noarch 4/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-account-1.4.0-2.fc16.noarch error reading information on service swift-account: No such file or directory warning: %post(openstack-swift-account-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installing : openstack-swift-container-1.4.0-2.fc16.noarch 5/5 Non-fatal POSTIN scriptlet failure in rpm package openstack-swift-container-1.4.0-2.fc16.noarch error reading information on service swift-container: No such file or directory warning: %post(openstack-swift-container-1.4.0-2.fc16.noarch) scriptlet failed, exit status 1 Installed: openstack-swift-account.noarch 0:1.4.0-2.fc16 openstack-swift-auth.noarch 0:1.4.0-2.fc16 openstack-swift-container.noarch 0:1.4.0-2.fc16 openstack-swift-object.noarch 0:1.4.0-2.fc16 openstack-swift-proxy.noarch 0:1.4.0-2.fc16 Complete! 2) RUNNING: I didn't change any default setting (just add debugging flag) [root at server1 ~]# service openstack-glance-api start; service openstack-glance-registry start /var/log/messages Oct 14 21:20:31 server1 glance-api[1404]: Traceback (most recent call last): Oct 14 21:20:31 server1 glance-api[1404]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 14 21:20:31 server1 glance-api[1404]: timer() Oct 14 21:20:31 server1 glance-api[1404]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 14 21:20:31 server1 glance-api[1404]: cb(*args, **kw) Oct 14 21:20:31 server1 glance-api[1404]: SystemError: error return without exception set Oct 14 21:25:37 server1 glance-registry[1460]: Traceback (most recent call last): Oct 14 21:25:37 server1 glance-registry[1460]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 14 21:25:37 server1 glance-registry[1460]: timer() Oct 14 21:25:37 server1 glance-registry[1460]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 14 21:25:37 server1 glance-registry[1460]: cb(*args, **kw) Oct 14 21:25:37 server1 glance-registry[1460]: SystemError: error return without exception set [root at server1 ~]# service openstack-glance-api status;service openstack-glance-registry status Redirecting to /bin/systemctl status openstack-glance-api.service Loaded: loaded (/lib/systemd/system/openstack-glance-api.service; disabled) Active: active (running) since Fri, 14 Oct 2011 21:20:28 +0300; 6min ago Main PID: 1404 (glance-api) CGroup: name=systemd:/system/openstack-glance-api.service ? 1404 /usr/bin/python /usr/bin/glance-api --config-file /etc/glance/glance-api.conf openstack-glance-registry.service - OpenStack Image Service (code-named Glance) Registry server Loaded: loaded (/lib/systemd/system/openstack-glance-registry.service; disabled) Active: active (running) since Fri, 14 Oct 2011 21:25:37 +0300; 1min 29s ago Main PID: 1460 (glance-registry) CGroup: name=systemd:/system/openstack-glance-registry.service ? 
1460 /usr/bin/python /usr/bin/glance-registry --config-file /etc/glance/glance-registry.conf ---------------------------NOVA START ----------------------- service openstack-nova-api start /var/log/messages Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main process exited, code=killed, status=11 Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service entered failed state. /var/log/nova/api.log 2011-10-14 21:30:54,785 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(created_at, Column) 2011-10-14 21:30:54,804 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(updated_at, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(deleted, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(id, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(name, Column) 2011-10-14 21:30:54,805 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(disk_format, Column) 2011-10-14 21:30:54,806 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(container_format, Column) 2011-10-14 21:30:54,806 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(size, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(status, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(is_public, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(location, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(checksum, Column) 2011-10-14 21:30:54,807 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(min_disk, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(min_ram, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) _configure_property(owner, Column) 2011-10-14 21:30:54,808 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,809 INFO sqlalchemy.orm.mapper.Mapper [-] (Image|images) constructed 2011-10-14 21:30:54,811 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(image, RelationshipProperty) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(created_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(updated_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,812 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(deleted, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(id, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] 
(ImageProperty|image_properties) _configure_property(image_id, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(name, Column) 2011-10-14 21:30:54,813 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) _configure_property(value, Column) 2011-10-14 21:30:54,814 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,814 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageProperty|image_properties) constructed 2011-10-14 21:30:54,816 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(image, RelationshipProperty) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(created_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(updated_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(deleted_at, Column) 2011-10-14 21:30:54,817 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(deleted, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(id, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(image_id, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(member, Column) 2011-10-14 21:30:54,818 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) _configure_property(can_share, Column) 2011-10-14 21:30:54,819 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) Identified primary key columns: ColumnSet([Column('id', Integer(), table=, primary_key=True, nullable=False)]) 2011-10-14 21:30:54,819 INFO sqlalchemy.orm.mapper.Mapper [-] (ImageMember|image_members) constructed 2011-10-14 21:30:55,188 DEBUG nova.utils [-] Running sh /usr/lib/python2.7/site-packages/nova/api/ec2/../../CA/genrootca.sh from (pid=1560) runthis /usr/lib/python2.7/site-packages/nova/utils.py:275 2011-10-14 21:30:55,188 DEBUG nova.utils [-] Running cmd (subprocess): sh /usr/lib/python2.7/site-packages/nova/api/ec2/../../CA/genrootca.sh from (pid=1560) execute /usr/lib/python2.7/site-packages/nova/utils.py:165 I didn't find any solution with segfault at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] Thank you ------------ Kind regards, Sergey From Kevin.Fox at pnnl.gov Fri Oct 14 20:45:06 2011 From: Kevin.Fox at pnnl.gov (Kevin Fox) Date: Fri, 14 Oct 2011 13:45:06 -0700 Subject: [Openstack-operators] Nova/libvirt Message-ID: <1318625106.30457.5929.camel@sledge.emsl.pnl.gov> Quick question. How strongly does Nova assume it manages everything in libvirt? I'm curious if I wanted to fire up a few virtual machines manually using libvirt on a nova managed host if it would confuse Nova. I'm ok with Nova not knowing about them. I'm wondering if Nova will assume it has more resources then are available and flip out. 
Thanks, Kevin From markmc at redhat.com Mon Oct 17 10:43:30 2011 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 17 Oct 2011 11:43:30 +0100 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <4E988631.10905@kulanov.org.ua> References: <4E988631.10905@kulanov.org.ua> Message-ID: <1318848212.2048.8.camel@sorcha> Hi Sergey, On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > Installing : > openstack-swift-object-1.4.0-2.fc16.noarch > 1/5 > Non-fatal POSTIN scriptlet failure in rpm package > openstack-swift-object-1.4.0-2.fc16.noarch > error reading information on service swift-object: No such file or directory > warning: %post(openstack-swift-object-1.4.0-2.fc16.noarch) scriptlet > failed, exit status 1 It looks like this problem was reported sometime ago and a patch is waiting to be applied: https://bugzilla.redhat.com/685155 Silas, David - can one of you take care of this or should I? Cheers, Mark. From sacampa at gmv.com Mon Oct 17 10:55:02 2011 From: sacampa at gmv.com (Sergio Ariel de la Campa Saiz) Date: Mon, 17 Oct 2011 12:55:02 +0200 Subject: [Openstack-operators] Storage tape and Swift Message-ID: <947E2550A3F9C740936DDCC9936667B901BE41AEC30B@GMVMAIL4.gmv.es> Hi... Can somebody tell me how to add some storage tapes to a Swift cluster? The main problem is that I have data loaded in these tapes, so I can?t erase them. Thanks... [cid:image002.png at 01CC8CCB.FC71ADE0] [cid:image003.gif at 01CC8CCB.DB36E780] Sergio Ariel de la Campa Saiz Ingeniero de Infraestructuras / Infrastucture Engineer / GMV Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid Tel. +34 91 807 21 00 Fax +34 91 807 21 99 www.gmv.com [cid:image004.gif at 01CC8CCB.DB36E780] [cid:image005.gif at 01CC8CCB.DB36E780] [cid:image006.gif at 01CC8CCB.DB36E780] [cid:image007.gif at 01CC8CCB.DB36E780] ______________________ This message including any attachments may contain confidential information, according to our Information Security Management System, and intended solely for a specific individual to whom they are addressed. Any unauthorised copy, disclosure or distribution of this message is strictly forbidden. If you have received this transmission in error, please notify the sender immediately and delete it. ______________________ Este mensaje, y en su caso, cualquier fichero anexo al mismo, puede contener informacion clasificada por su emisor como confidencial en el marco de su Sistema de Gestion de Seguridad de la Informacion siendo para uso exclusivo del destinatario, quedando prohibida su divulgacion copia o distribucion a terceros sin la autorizacion expresa del remitente. Si Vd. ha recibido este mensaje erroneamente, se ruega lo notifique al remitente y proceda a su borrado. Gracias por su colaboracion. ______________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 5711 bytes Desc: image003.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.gif Type: image/gif Size: 1306 bytes Desc: image004.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.gif Type: image/gif Size: 1309 bytes Desc: image005.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image006.gif Type: image/gif Size: 1279 bytes Desc: image006.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.gif Type: image/gif Size: 1323 bytes Desc: image007.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 222 bytes Desc: image002.png URL: From markmc at redhat.com Mon Oct 17 11:02:54 2011 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 17 Oct 2011 12:02:54 +0100 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <4E988631.10905@kulanov.org.ua> References: <4E988631.10905@kulanov.org.ua> Message-ID: <1318849375.2048.11.camel@sorcha> On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > ---------------------------NOVA START ----------------------- > service openstack-nova-api start > > /var/log/messages > Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault > at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] > Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main > process exited, code=killed, status=11 > Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service > entered failed state. So, there are some known problems with 2.14.90-11 https://admin.fedoraproject.org/updates/FEDORA-2011-14175 Could you try: $> yum downgrade glibc? Hopefully that will get you 2.14.90-10 Thanks, Mark. From sergey at kulanov.org.ua Mon Oct 17 16:15:36 2011 From: sergey at kulanov.org.ua (Sergey Kulanov) Date: Mon, 17 Oct 2011 19:15:36 +0300 Subject: [Openstack-operators] Problem running Openstack on fedora 16 (nova segfault) In-Reply-To: <1318849375.2048.11.camel@sorcha> References: <4E988631.10905@kulanov.org.ua> <1318849375.2048.11.camel@sorcha> Message-ID: <4E9C54A8.9070807@kulanov.org.ua> 17.10.2011 14:02, Mark McLoughlin ?????: > On Fri, 2011-10-14 at 21:57 +0300, Sergey Kulanov wrote: > >> ---------------------------NOVA START ----------------------- >> service openstack-nova-api start >> >> /var/log/messages >> Oct 14 21:30:55 server1 kernel: [ 1186.188224] nova-api[1560]: segfault >> at 4 ip 0025a950 sp bfef9108 error 4 in libc-2.14.90.so[115000+1a7000] >> Oct 14 21:30:55 server1 systemd[1]: openstack-nova-api.service: main >> process exited, code=killed, status=11 >> Oct 14 21:30:55 server1 systemd[1]: Unit openstack-nova-api.service >> entered failed state. > So, there are some known problems with 2.14.90-11 > > https://admin.fedoraproject.org/updates/FEDORA-2011-14175 > > Could you try: > > $> yum downgrade glibc? > > Hopefully that will get you 2.14.90-10 > > Thanks, > Mark. > > Hi, Thanks for the replay Actually I tried different glibc versions end even installing openstack on fedora 15, I had the same problem. 
$> yum downgrade glibc glibc-common $> service openstack-glance-api start $> service openstack-glance-registry start works fine, everything starts ok but with some warnings: Oct 17 18:58:35 server1 yum[2795]: Installed: glibc-2.14.90-10.i686 Oct 17 18:58:45 server1 yum[2795]: Installed: glibc-common-2.14.90-10.i686 Oct 17 18:59:05 server1 glance-api[2828]: Traceback (most recent call last): Oct 17 18:59:05 server1 glance-api[2828]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 18:59:05 server1 glance-api[2828]: timer() Oct 17 18:59:05 server1 glance-api[2828]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 18:59:05 server1 glance-api[2828]: cb(*args, **kw) Oct 17 18:59:05 server1 glance-api[2828]: SystemError: error return without exception set Oct 17 19:01:01 server1 systemd-logind[669]: New session 5 of user root. Oct 17 19:01:01 server1 systemd-logind[669]: Removed session 5. Oct 17 19:02:57 server1 glance-registry[2888]: Traceback (most recent call last): Oct 17 19:02:57 server1 glance-registry[2888]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 19:02:57 server1 glance-registry[2888]: timer() Oct 17 19:02:57 server1 glance-registry[2888]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 19:02:57 server1 glance-registry[2888]: cb(*args, **kw) Oct 17 19:02:57 server1 glance-registry[2888]: SystemError: error return without exception set Now try to start nova: $> [root at server1 ~]# service openstack-nova-api start Redirecting to /bin/systemctl start openstack-nova-api.service Oct 17 19:05:10 server1 kernel: [ 8053.797011] nova-api[2919]: segfault at 4 ip 003eefc0 sp bff3c9b8 error 4 in libc-2.14.90.so[2aa000+1a6000] Oct 17 19:05:10 server1 systemd[1]: openstack-nova-api.service: main process exited, code=killed, status=11 Oct 17 19:05:10 server1 systemd[1]: Unit openstack-nova-api.service entered failed state. $> [root at server1 ~]# service openstack-nova-volume status Redirecting to /bin/systemctl status openstack-nova-volume.service openstack-nova-volume.service - OpenStack Nova Volume Server Loaded: loaded (/lib/systemd/system/openstack-nova-volume.service; disabled) Active: active (running) since Mon, 17 Oct 2011 19:08:40 +0300; 15s ago Main PID: 3052 (nova-volume) CGroup: name=systemd:/system/openstack-nova-volume.service ? 3052 /usr/bin/python /usr/bin/nova-volume --flagfile /etc/nova/nova.conf --logfile /var/log/nova/volume.log Only nova-volume starts, the rest services have segfault: Oct 17 19:07:55 server1 kernel: [ 8218.917047] nova-compute[2978]: segfault at bf856000 ip 00255d19 sp bf8538d8 error 6 in libc-2.14.90.so[110000+1a6000] Oct 17 19:07:55 server1 systemd[1]: openstack-nova-compute.service: main process exited, code=killed, status=11 Oct 17 19:07:55 server1 systemd[1]: Unit openstack-nova-compute.service entered failed state. Oct 17 19:08:25 server1 kernel: [ 8248.830936] nova-network[3036]: segfault at bfbaf000 ip 00f3ad3b sp bfbabae8 error 6 in libc-2.14.90.so[df5000+1a6000] Oct 17 19:08:25 server1 systemd[1]: openstack-nova-network.service: main process exited, code=killed, status=11 Oct 17 19:08:25 server1 systemd[1]: Unit openstack-nova-network.service entered failed state. 
Oct 17 19:08:40 server1 nova-volume[3052]: Traceback (most recent call last): Oct 17 19:08:40 server1 nova-volume[3052]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 336, in fire_timers Oct 17 19:08:40 server1 nova-volume[3052]: timer() Oct 17 19:08:40 server1 nova-volume[3052]: File "/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 56, in __call__ Oct 17 19:08:40 server1 nova-volume[3052]: cb(*args, **kw) Oct 17 19:08:40 server1 nova-volume[3052]: SystemError: error return without exception set Oct 17 19:10:32 server1 kernel: [ 8375.977931] nova-scheduler[3091]: segfault at bfdf0000 ip 00b3bdcc sp bfdecc50 error 6 in libc-2.14.90.so[9f6000+1a6000] Oct 17 19:10:32 server1 systemd[1]: openstack-nova-scheduler.service: main process exited, code=killed, status=11 Oct 17 19:10:32 server1 systemd[1]: Unit openstack-nova-scheduler.service entered failed state. By the way, the same happens with glibc-2.14.90-12.i686 Thanks, Sergey From Till.Mossakowski at dfki.de Wed Oct 19 16:04:04 2011 From: Till.Mossakowski at dfki.de (Till Mossakowski) Date: Wed, 19 Oct 2011 18:04:04 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long Message-ID: <4E9EF4F4.2080406@dfki.de> Hi, I have set up openstack using stackops. I have installed one controller node and one compute node (each using two 1GBit NICs), following the book "Deploying Openstack". I am using nova-objectstore for storing images. Now starting a machine with a 5G image takes quite a while, probably because the image is mounted via nfs to the compute node. With libvirt, I am used to start VMs instantly. Is there a way to do the same with openstack? The image would need to be stored directly on the compute node, of course. Ideally, in a network with more nodes, the lots of images I have would be distributed to compute nodes in advance, and a special scheduler would select a compute node holding the needed image. Is this possible with openstack? Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 From diego.parrilla at stackops.com Wed Oct 19 16:39:36 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Wed, 19 Oct 2011 18:39:36 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: <4E9EF4F4.2080406@dfki.de> References: <4E9EF4F4.2080406@dfki.de> Message-ID: Hi, my answers below, On Wed, Oct 19, 2011 at 6:04 PM, Till Mossakowski wrote: > Hi, > > I have set up openstack using stackops. > Good choice ;-) > I have installed one controller node and one compute node (each using two > 1GBit NICs), following the book "Deploying Openstack". > I am using nova-objectstore for storing images. > > Now starting a machine with a 5G image takes quite a while, probably > because the image is mounted via nfs to the compute node. > 5GB image it's not too big... we use NFS to share instances among nodes to help with the live migration and performance it's acceptable. How much is 'quite a while' in seconds? > > With libvirt, I am used to start VMs instantly. 
Is there a way to do the > same with openstack? The image would need to be stored directly on the > compute node, of course. Ideally, in a network with more nodes, the lots of > images I have would be distributed to compute nodes in advance, and a > special scheduler would select a compute node holding the needed image. Is > this possible with openstack? > If you share the /var/lib/nova/instances with NFS, during the 'launch' process the base virtual image is copied to '_base'. Depending on the size of this file it will take longer. Once it's copied next time you use this image it should go much faster. Note: I have tested right now with a 1Gb launching a >25GB Windows VM and it took 3-4 minutes the first time. New Windows images, it took only a few seconds. > > Best, > Till > > -- > Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 > DFKI GmbH Bremen Fax +49-421-218-9864226 > Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de > Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till > > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > principal office, *not* the address for mail etc.!!!: > Trippstadter Str. 122, D-67663 Kaiserslautern > management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff > supervisory board: Prof. Hans A. Aukes (chair) > Amtsgericht Kaiserslautern, HRB 2313 > ______________________________**_________________ > Openstack-operators mailing list > Openstack-operators at lists.**openstack.org > http://lists.openstack.org/**cgi-bin/mailman/listinfo/** > openstack-operators > -- Diego Parrilla *CEO* *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | skype:diegoparrilla* * * * ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. 
If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Till.Mossakowski at dfki.de Wed Oct 19 18:12:08 2011 From: Till.Mossakowski at dfki.de (Till Mossakowski) Date: Wed, 19 Oct 2011 20:12:08 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> Message-ID: <4E9F12F8.30207@dfki.de> Hi, > my answers below, many thanks for your quick answer. > I have set up openstack using stackops. > > > Good choice ;-) Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? > 5GB image it's not too big... we use NFS to share instances among nodes > to help with the live migration and performance it's acceptable. How > much is 'quite a while' in seconds? between half a minute and a minute (I haven't taken the exact time...). This is too long for our users. > If you share the /var/lib/nova/instances with NFS, during the 'launch' > process the base virtual image is copied to '_base'. Depending on the > size of this file it will take longer. Once it's copied next time you > use this image it should go much faster. > > Note: I have tested right now with a 1Gb launching a >25GB Windows VM > and it took 3-4 minutes the first time. New Windows images, it took only > a few seconds. This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 From diego.parrilla at stackops.com Thu Oct 20 08:53:40 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Thu, 20 Oct 2011 10:53:40 +0200 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: <4E9F12F8.30207@dfki.de> References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi, my answers below. On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski wrote: > Hi, > > my answers below, >> > > many thanks for your quick answer. > > > I have set up openstack using stackops. >> >> >> Good choice ;-) >> > > Yes, the stackops GUI is very nice. However, stackops is based on cactus, > right? Is there a way of using diablo with stackops? Perhaps it is possible > to upgrade the Ubuntu lucid distro that is coming with stackops to natty or > oneiric and then upgrade to diablo using the source > ppa:openstack-release/2011.3 for openstack? Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. 
But things are working much better now. > > > 5GB image it's not too big... we use NFS to share instances among nodes >> to help with the live migration and performance it's acceptable. How >> much is 'quite a while' in seconds? >> > > between half a minute and a minute (I haven't taken the exact time...). > This is too long for our users. If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) > > > If you share the /var/lib/nova/instances with NFS, during the 'launch' >> process the base virtual image is copied to '_base'. Depending on the >> size of this file it will take longer. Once it's copied next time you >> use this image it should go much faster. >> >> Note: I have tested right now with a 1Gb launching a >25GB Windows VM >> and it took 3-4 minutes the first time. New Windows images, it took only >> a few seconds. >> > > This is interesting. Is there a way of telling the scheduler to prefer a > compute node that already has copied the needed image? Try this: 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. 3) Virtual disks are now cached in /var/lib/nova/instances/_base 4) Try to launch now the virtual disks again. They should start very fast. If you need some kind of assistance, please let me know. Regards Diego > > > Best, Till > > -- > Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 > DFKI GmbH Bremen Fax +49-421-218-9864226 > Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de > Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till > > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > principal office, *not* the address for mail etc.!!!: > Trippstadter Str. 122, D-67663 Kaiserslautern > management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff > supervisory board: Prof. Hans A. Aukes (chair) > Amtsgericht Kaiserslautern, HRB 2313 > -- Diego Parrilla *CEO* *www.stackops.com | * diego.parrilla at stackops.com** | +34 649 94 43 29 | skype:diegoparrilla* * * * ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. 
***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris-michel.deschenes at ubisoft.com Thu Oct 20 13:47:02 2011 From: boris-michel.deschenes at ubisoft.com (Boris-Michel Deschenes) Date: Thu, 20 Oct 2011 09:47:02 -0400 Subject: [Openstack-operators] Starting large VMs takes quite long In-Reply-To: References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de> Message-ID: Hi guys, Just a quick note, I had this setup at some point (NFS-mounted /var/lib/nova/instances) which is essential to get live VM migrations if I'm not mistaken (live migration was working perfectly). The problem I had with this setup was that the VM startup time was considerably slower than when the images were residing on a local disk (and I mean, even after all images are "cached"). Basically an image will start the fastest when it is cached locally (local drive) Then, not quite as fast when cached but on a NFS-mounted directory Then really slowly when residing entirely on another disk and needed to be written locally to be cached These are the observations I made but I realize other factors weigh in (SAS vs SATA disk, network speed, etc.) Please advise if you get the same speed in NFS-cached vs local-cached setup as it might convince me to go back to an NFS share (also were you using SAS disks to serve the NFS?). Thanks De : openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] De la part de Diego Parrilla Envoy? : 20 octobre 2011 04:54 ? : Till Mossakowski Cc : openstack-operators at lists.openstack.org Objet : Re: [Openstack-operators] Starting large VMs takes quite long Hi, my answers below. On Wed, Oct 19, 2011 at 8:12 PM, Till Mossakowski > wrote: Hi, my answers below, many thanks for your quick answer. I have set up openstack using stackops. Good choice ;-) Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. But things are working much better now. 5GB image it's not too big... we use NFS to share instances among nodes to help with the live migration and performance it's acceptable. How much is 'quite a while' in seconds? between half a minute and a minute (I haven't taken the exact time...). This is too long for our users. 
If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) If you share the /var/lib/nova/instances with NFS, during the 'launch' process the base virtual image is copied to '_base'. Depending on the size of this file it will take longer. Once it's copied next time you use this image it should go much faster. Note: I have tested right now with a 1Gb launching a >25GB Windows VM and it took 3-4 minutes the first time. New Windows images, it took only a few seconds. This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? Try this: 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. 3) Virtual disks are now cached in /var/lib/nova/instances/_base 4) Try to launch now the virtual disks again. They should start very fast. If you need some kind of assistance, please let me know. Regards Diego Best, Till -- Prof. Dr. Till Mossakowski Cartesium, room 2.51 Phone +49-421-218-64226 DFKI GmbH Bremen Fax +49-421-218-9864226 Safe & Secure Cognitive Systems Till.Mossakowski at dfki.de Enrique-Schmidt-Str. 5, D-28359 Bremen http://www.dfki.de/sks/till Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH principal office, *not* the address for mail etc.!!!: Trippstadter Str. 122, D-67663 Kaiserslautern management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff supervisory board: Prof. Hans A. Aukes (chair) Amtsgericht Kaiserslautern, HRB 2313 -- Diego Parrilla CEO www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla [cid:~WRD000.jpg] ******************** ADVERTENCIA LEGAL ******************** Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. 
This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ~WRD000.jpg
Type: image/jpeg
Size: 823 bytes
Desc: ~WRD000.jpg
URL:

From diego.parrilla at stackops.com  Thu Oct 20 14:03:31 2011
From: diego.parrilla at stackops.com (Diego Parrilla)
Date: Thu, 20 Oct 2011 16:03:31 +0200
Subject: [Openstack-operators] Starting large VMs takes quite long
In-Reply-To:
References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de>
Message-ID:

Hi,

my answers below,

On Thu, Oct 20, 2011 at 3:47 PM, Boris-Michel Deschenes <boris-michel.deschenes at ubisoft.com> wrote:

> Hi guys,
>
> Just a quick note, I had this setup at some point (NFS-mounted
> /var/lib/nova/instances) which is essential to get live VM migrations if I'm
> not mistaken (live migration was working perfectly). The problem I had with
> this setup was that the VM startup time was considerably slower than when
> the images were residing on a local disk (and I mean, even after all images
> are "cached").

It's true. The faster the disk and the closer it is, the better.

> Basically an image will start the fastest when it is cached locally (local
> drive)

Correct.

> Then, not quite as fast when cached but on a NFS-mounted directory

Correct. It takes some time to create the local disks. It's very important
to have a good connection to the shared file system (it's not mandatory to
use NFS).

> Then really slowly when residing entirely on another disk and needed to be
> written locally to be cached.

Right, it can take several minutes on a 1Gb link.

> These are the observations I made but I realize other factors weigh in (SAS
> vs SATA disk, network speed, etc.) Please advise if you get the same speed
> in NFS-cached vs local-cached setup as it might convince me to go back to an
> NFS share (also were you using SAS disks to serve the NFS?).

No, the performance on local disk is much higher than running it over NFS on
a 1Gb link. From my perspective, not only is live migration a must for our
customers, but the local virtual disks must also persist through a
catastrophic failure of a nova-compute. That's the reason why we recommend
10Gb and a good, performant NFS file server. 15K or 10K SAS is not so
relevant, the bottleneck is the network (speed and latency). There are also
good solutions combining 10Gb + SSD cache disks + 7.2KRPM SAS/SATA disks.

I would like to know what people are using in real-life deployments. Any
more thoughts?

Regards
Diego

--
Diego Parrilla
CEO
www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla

***************** PRIVILEGED AND CONFIDENTIAL ****************
We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law.
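The setup being discussed assumes /var/lib/nova/instances is exported from one machine and mounted on every compute node. A minimal sketch of that wiring, where the exporting host, the client network and the mount options are examples rather than anything taken from this thread:

  # on the box exporting the directory (controller or a dedicated filer)
  echo "/var/lib/nova/instances 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
  exportfs -ra

  # on each compute node
  mount -t nfs controller:/var/lib/nova/instances /var/lib/nova/instances

  # after the first launch of each image, the cached copies show up here
  ls -lh /var/lib/nova/instances/_base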
En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. ***************** PRIVILEGED AND CONFIDENTIAL **************** We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.O'Loughlin at surrey.ac.uk Thu Oct 20 15:14:19 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Thu, 20 Oct 2011 16:14:19 +0100 Subject: [Openstack-operators] swift proxy server problem Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Hi All, I'm trying to set up swift and am having an issue with getting the proxy service to start, after a swift-init proxy start the proxy does not start and I see this in the logs: Oct 20 16:12:14 storage05 proxy-server UNCAUGHT EXCEPTION#012Traceback (most recent call last):#012 File "/usr/bin/swift-proxy-server", line 22, in #012 run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)#012 File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 126, in run_wsgi#012 app = loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp#012 return loadobj(APP, uri, name=name, **kw)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in loadobj#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in _loadconfig#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 405, in get_context#012 global_additions=global_additions)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in _pipeline_app_context#012 for name in pipeline[:-1]]#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in get_context#012 section)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in _context_from_use#012 object_type, name=use, global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in get_context#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 
248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in _loadegg#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in get_context#012 object_type, name=name)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587, Any help appreciated. Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From deJongm at TEOCO.com Thu Oct 20 18:00:07 2011 From: deJongm at TEOCO.com (de Jong, Mark-Jan) Date: Thu, 20 Oct 2011 14:00:07 -0400 Subject: [Openstack-operators] nova-network assigned IP address Message-ID: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> Hello, Is there a way to assign an IP address to nova-network other than the default gateway of the network? I want my guests to be directly connected to the "public" network and don't want nova-network to act as my router. I just need it for DHCP. Is this possible? Thanks! ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,.... Mark-Jan de Jong O | 703-259-4406 C | 703-254-6284 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sacampa at gmv.com Fri Oct 21 06:30:32 2011 From: sacampa at gmv.com (Sergio Ariel de la Campa Saiz) Date: Fri, 21 Oct 2011 08:30:32 +0200 Subject: [Openstack-operators] swift proxy server problem In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Message-ID: <947E2550A3F9C740936DDCC9936667B901BE41AEC50A@GMVMAIL4.gmv.es> I suggest you to send the configuration files. Sergio Ariel de la Campa Saiz Ingeniero de Infraestructuras / Infrastucture Engineer / GMV Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid Tel. +34 91 807 21 00 Fax +34 91 807 21 99 www.gmv.com ? ? ? 
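The traceback John posted is cut off before the actual error. Since the failure happens inside paste.deploy while it builds the proxy pipeline, one way to see the real cause is to load the config by hand with the same loadapp call shown in swift/common/wsgi.py above, and to check which filters and eggs the pipeline references; typically one of them is not installed or has a typo in its [filter:...] section. A sketch, assuming the default config path:

  # show the pipeline the proxy is being asked to build
  grep -n pipeline /etc/swift/proxy-server.conf

  # load the config the way run_wsgi does; the full traceback usually names
  # the pipeline entry that cannot be loaded
  python -c "from paste.deploy import loadapp; loadapp('config:/etc/swift/proxy-server.conf')"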
-----Mensaje original----- De: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] En nombre de J.O'Loughlin at surrey.ac.uk Enviado el: jueves, 20 de octubre de 2011 17:14 Para: openstack-operators at lists.openstack.org Asunto: [Openstack-operators] swift proxy server problem Hi All, I'm trying to set up swift and am having an issue with getting the proxy service to start, after a swift-init proxy start the proxy does not start and I see this in the logs: Oct 20 16:12:14 storage05 proxy-server UNCAUGHT EXCEPTION#012Traceback (most recent call last):#012 File "/usr/bin/swift-proxy-server", line 22, in #012 run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)#012 File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 126, in run_wsgi#012 app = loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp#012 return loadobj(APP, uri, name=name, **kw)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in loadobj#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in _loadconfig#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/l oadwsgi.py", line 405, in get_context#012 global_additions=global_additions)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in _pipeline_app_context#012 for name in pipeline[:-1]]#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in get_context#012 section)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in _context_from_use#012 object_type, name=use, global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in get_context#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext#012 global_conf=global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in _loadegg#012 return loader.get_context(object_type, name, global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in get_context#012 object_type , name=name)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587, Any help appreciated. Regards John O'Loughlin FEPS IT, Service Delivery Team Leader _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ______________________ This message including any attachments may contain confidential information, according to our Information Security Management System, and intended solely for a specific individual to whom they are addressed. Any unauthorised copy, disclosure or distribution of this message is strictly forbidden. If you have received this transmission in error, please notify the sender immediately and delete it. 
From ghe.rivero at gmail.com  Fri Oct 21 12:17:56 2011
From: ghe.rivero at gmail.com (ghe. rivero)
Date: Fri, 21 Oct 2011 14:17:56 +0200
Subject: [Openstack-operators] Starting large VMs takes quite long
In-Reply-To:
References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de>
Message-ID:

Hi,

Talking about live-migration and shared mount points, has anyone had the chance to try the glusterfs connector? They claim to be able to: "Instantly boot VMs using a mountable filesystem interface - no more fetching the entire VM image before booting" (http://www.gluster.com/2011/07/27/glusters-shiny-new-connector-for-openstack/)

See you!

    Ghe Rivero

On Thu, Oct 20, 2011 at 4:03 PM, Diego Parrilla wrote:
> I would like to know what people are using in real-life deployments.
> Any more thoughts?

--
.''`. Pienso, Luego Incordio : :' : `. `' `- www.debian.org www.hispalinux.es
GPG Key: 26F020F7
GPG fingerprint: 4986 39DA D152 050B 4699 9A71 66DB 5A36 26F0 20F7
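Before pointing nova at gluster, NFS or anything else, it is easy to confirm that the instances directory really is shared between two compute nodes, which is what the live migration path relies on. A quick check, with hostnames as placeholders only:

  # write a marker from one compute node...
  ssh compute-01 'hostname > /var/lib/nova/instances/shared_check'
  # ...and confirm the second node sees it
  ssh compute-02 'cat /var/lib/nova/instances/shared_check && rm /var/lib/nova/instances/shared_check'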
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From borsodp at staff.westminster.ac.uk  Fri Oct 21 12:57:10 2011
From: borsodp at staff.westminster.ac.uk (Peter Borsody)
Date: Fri, 21 Oct 2011 13:57:10 +0100
Subject: [Openstack-operators] nova-network assigned IP address
In-Reply-To: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth>
References: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth>
Message-ID:

Hi,

I had exactly the same problem. So I patched the nova source code to make it work, and added an option at the point where dnsmasq is managed.

Cheers,
Peter

On 20 October 2011 19:00, de Jong, Mark-Jan wrote:
> Hello,
> Is there a way to assign an IP address to nova-network other than the
> default gateway of the network? I want my guests to be directly connected to
> the "public" network and don't want nova-network to act as my router. I just
> need it for DHCP. Is this possible?
>
> Thanks!
>
> ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,....
> Mark-Jan de Jong
> O | 703-259-4406
> C | 703-254-6284

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wittwerch at gmail.com  Fri Oct 21 14:58:47 2011
From: wittwerch at gmail.com (Christian Wittwer)
Date: Fri, 21 Oct 2011 16:58:47 +0200
Subject: [Openstack-operators] Starting large VMs takes quite long
In-Reply-To:
References: <4E9EF4F4.2080406@dfki.de> <4E9F12F8.30207@dfki.de>
Message-ID:

"The Gluster Connector for OpenStack", it's ridiculous.
Have a look at the docs, they've done nothing concerning Openstack compute.
=> http://www.gluster.com/wp-content/uploads/2011/07/Gluster-Openstack-VM-storage-v1-shehjar.pdf
They just create a normal gluster volume and store the VMs on it. That was even possible before, I had that setup running long before.

Christian

2011/10/21 ghe. rivero
> Hi,
> Talking about live-migration and shared mount points, has anyone had the chance
> to try the glusterfs connector? They claim to be able to: "Instantly boot VMs
> using a mountable filesystem interface - no more fetching the entire VM image
> before booting"
> (http://www.gluster.com/2011/07/27/glusters-shiny-new-connector-for-openstack/)
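If it really is just a plain gluster volume underneath, the setup Christian describes comes down to mounting that volume over the instances directory on every compute node; a sketch with made-up server and volume names:

  # one-off mount of an existing gluster volume
  mount -t glusterfs gluster-01:/nova-instances /var/lib/nova/instances

  # or persist it across reboots
  echo "gluster-01:/nova-instances /var/lib/nova/instances glusterfs defaults,_netdev 0 0" >> /etc/fstab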
>>>> >>>> >>>> Good choice ;-) >>>> >>>> >>>> >>>> Yes, the stackops GUI is very nice. However, stackops is based on cactus, right? Is there a way of using diablo with stackops? Perhaps it is possible to upgrade the Ubuntu lucid distro that is coming with stackops to natty or oneiric and then upgrade to diablo using the source ppa:openstack-release/2011.3 for openstack? >>>> >>>> >>>> >>>> Yes, the 0.3 version with Diablo release is coming. We detected some QA issues. But things are working much better now. >>>> >>>> >>>> >>>> >>>> >>>> 5GB image it's not too big... we use NFS to share instances among nodes >>>> to help with the live migration and performance it's acceptable. How >>>> much is 'quite a while' in seconds? >>>> >>>> >>>> >>>> between half a minute and a minute (I haven't taken the exact time...). >>>> This is too long for our users. >>>> >>>> >>>> >>>> If the virtual disks are cached, launching a 40GB virtual machine takes less than 5 seconds in our test platform (IBM x3550M3 Dual Xeon 5620 64GB with NFS as shared storage on 1Gb) >>>> >>>> >>>> >>>> >>>> >>>> If you share the /var/lib/nova/instances with NFS, during the 'launch' >>>> process the base virtual image is copied to '_base'. Depending on the >>>> size of this file it will take longer. Once it's copied next time you >>>> use this image it should go much faster. >>>> >>>> Note: I have tested right now with a 1Gb launching a >25GB Windows VM >>>> and it took 3-4 minutes the first time. New Windows images, it took only >>>> a few seconds. >>>> >>>> >>>> >>>> This is interesting. Is there a way of telling the scheduler to prefer a compute node that already has copied the needed image? >>>> >>>> >>>> >>>> Try this: >>>> >>>> >>>> >>>> 1) Configure the compute nodes to use a shared directory with NFS on /var/lib/nova/instances >>>> >>>> 2) Launch ALL the virtual disks you need at runtime. It will take a while the first time. >>>> >>>> 3) Virtual disks are now cached in /var/lib/nova/instances/_base >>>> >>>> 4) Try to launch now the virtual disks again. They should start very fast. >>>> >>>> >>>> >>>> If you need some kind of assistance, please let me know. >>>> >>>> >>>> >>>> Regards >>>> >>>> Diego >>>> >>>> >>>> >>>> Best, Till >>>> >>>> -- >>>> Prof. Dr. Till Mossakowski ?Cartesium, room 2.51 Phone +49-421-218-64226 >>>> DFKI GmbH Bremen ? ? ? ? ? ? ? ? ? ? ? ? ? ? Fax +49-421-218-9864226 >>>> Safe & Secure Cognitive Systems ? ? ? ? ? ? Till.Mossakowski at dfki.de >>>> Enrique-Schmidt-Str. 5, D-28359 Bremen ? http://www.dfki.de/sks/till >>>> >>>> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH >>>> principal office, *not* the address for mail etc.!!!: >>>> Trippstadter Str. 122, D-67663 Kaiserslautern >>>> management board: Prof. Wolfgang Wahlster (chair), Dr. Walter Olthoff >>>> supervisory board: Prof. Hans A. Aukes (chair) >>>> Amtsgericht Kaiserslautern, HRB 2313 >>>> >>>> >>>> >>>> -- >>>> >>>> Diego Parrilla >>>> CEO >>>> www.stackops.com?|??diego.parrilla at stackops.com?|?+34 649 94 43 29 |?skype:diegoparrilla >>>> >>>> ******************** ADVERTENCIA LEGAL ******************** >>>> Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. 
Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. >>>> >>>> ***************** PRIVILEGED AND CONFIDENTIAL **************** >>>> We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. >>>> >>>> >>>> >>>> >>> >>> -- >>> Diego Parrilla >>> CEO >>> www.stackops.com?|??diego.parrilla at stackops.com?|?+34 649 94 43 29 |?skype:diegoparrilla >>> >>> ******************** ADVERTENCIA LEGAL ******************** >>> Le informamos, como destinatario de este mensaje, que el correo electr?nico y las comunicaciones por medio de Internet no permiten asegurar ni garantizar la confidencialidad de los mensajes transmitidos, as? como tampoco su integridad o su correcta recepci?n, por lo que STACKOPS TECHNOLOGIES S.L. no asume responsabilidad alguna por tales circunstancias. Si no consintiese en la utilizaci?n del correo electr?nico o de las comunicaciones v?a Internet le rogamos nos lo comunique y ponga en nuestro conocimiento de manera inmediata. Este mensaje va dirigido, de manera exclusiva, a su destinatario y contiene informaci?n confidencial y sujeta al secreto profesional, cuya divulgaci?n no est? permitida por la ley. En caso de haber recibido este mensaje por error, le rogamos que, de forma inmediata, nos lo comunique mediante correo electr?nico remitido a nuestra atenci?n y proceda a su eliminaci?n, as? como a la de cualquier documento adjunto al mismo. Asimismo, le comunicamos que la distribuci?n, copia o utilizaci?n de este mensaje, o de cualquier documento adjunto al mismo, cualquiera que fuera su finalidad, est?n prohibidas por la ley. >>> >>> ***************** PRIVILEGED AND CONFIDENTIAL **************** >>> We hereby inform you, as addressee of this message, that e-mail and Internet do not guarantee the confidentiality, nor the completeness or proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L. does not assume any liability for those circumstances. 
Should you not agree to the use of e-mail or to communications via Internet, you are kindly requested to notify us immediately. This message is intended exclusively for the person to whom it is addressed and contains privileged and confidential information protected from disclosure by law. If you are not the addressee indicated in this message, you should immediately delete it and any attachments and notify the sender by reply e-mail. In such case, you are hereby notified that any dissemination, distribution, copying or use of this message or any attachments, for any purpose, is strictly prohibited by law. >>> >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >> >> >> -- >> ?.''`.? Pienso, Luego Incordio >> : :' : >> `. `' >> ? `-? ? www.debian.org? ? www.hispalinux.es >> >> GPG Key: 26F020F7 >> GPG fingerprint: 4986 39DA D152 050B 4699? 9A71 66DB 5A36 26F0 20F7 >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > From wittwerch at gmail.com Fri Oct 21 15:01:38 2011 From: wittwerch at gmail.com (Christian Wittwer) Date: Fri, 21 Oct 2011 17:01:38 +0200 Subject: [Openstack-operators] nova-network assigned IP address In-Reply-To: References: <5E3DCAE61C95FA4397679425D7275D26055C296B3B@HQ-MX03.us.teo.earth> Message-ID: You can overwrite the gateway which dnsmasq should provide via dhcp. Works fine. foo:~# cat /etc/nova/dnsmasq.conf dhcp-option=3,10.2.20.1 Cheers, Christian 2011/10/21 Peter Borsody : > Hi, > > I had the exactly same problem.So, I patched the nova source code to work, > added some option to the point of dnsmasq managing. > Cheers, > Peter > On 20 October 2011 19:00, de Jong, Mark-Jan wrote: >> >> Hello, >> >> Is there a way to assign an IP address to nova-network other than the >> default gateway of the network? I want my guests to be directly connected to >> the ?public? network and don?t want nova-network to act as my router. I just >> need it for DHCP. Is this possible? >> >> >> >> Thanks! >> >> >> >> ,.,.,.,..,...,.,..,..,..,...,....,..,..,....,.... >> >> Mark-Jan de Jong >> >> O | 703-259-4406 >> >> C | 703-254-6284 >> >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From slyphon at gmail.com Mon Oct 24 14:13:51 2011 From: slyphon at gmail.com (Jonathan Simms) Date: Mon, 24 Oct 2011 10:13:51 -0400 Subject: [Openstack-operators] XFS documentation seems to conflict with recommendations in Swift In-Reply-To: References: <395F6A92-D224-4A3D-BEC5-87625204DC93@zeroaccess.org> Message-ID: Thanks all for the information! I'm going to use this advice as part of the next round of hardware purchasing we're doing. On Thu, Oct 13, 2011 at 6:11 PM, Gordon Irving wrote: > > > If you are on a Battery Backed Unit raid controller, then its generally safe > to disable barriers for journal filesystems.? 
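> For example, a rough sketch of what that looks like for a single object-server
> partition (the device name and mount point here are assumptions, adjust them to
> your own layout, and only use nobarrier when the volume really does sit behind a
> battery backed write cache):
>
>   # /etc/fstab entry for an XFS partition on a BBU raid volume
>   /dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,nobarrier,logbufs=8  0 0
>
>   # same thing done by hand at mount time
>   mount -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdb1 /srv/node/sdb1
>
> Without a BBU, simply leave nobarrier out of the option list.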
If you're doing soft raid, jbod, > single disk arrays, or cheaped out and did not get a BBU, then you may want to > enable barriers for filesystem consistency. > > > > For raid cards with a BBU, set your io scheduler to noop and disable > barriers. The raid card does its own re-ordering of io operations; the OS > has an incomplete picture of the true drive geometry. The raid card is > emulating one disk geometry which could be an array of 2 to 100+ disks. The > OS simply cannot make good judgment calls on how best to schedule io to > different parts of the disk because it is built around the assumption of a > single spinning disk. The same goes for knowing whether a write has made it safely to a > non-persistent cache (ie the disk cache), to a persistent cache (ie the battery-backed cache > in your raid card) or to persistent storage (the array of disks). This is > a failure of the raid card <-> OS interface: there simply is not the > richness to signal that a write is ok once it is on the platter or in persistent cache but not > okay while it only sits in the disk cache. > > > > Enabling barriers effectively turns all writes into Write-Through > operations, so the write goes straight to the disk platter and you get > little performance benefit from the raid card (which hurts a lot in terms of > lost iops). If the BBU loses charge or fails, the raid controller > downgrades to Write-Through (vs Write-Back) operation. > > > > BBU raid controllers disable disk caches, as these are not safe in the event of > power loss, and do not provide any benefit over the raid card cache. > > > > In the context of swift, hdfs and other highly replicated datastores, I run > them in jbod or raid-0 + nobarrier, noatime, nodiratime with a filesystem > aligned to the geometry of the underlying storage* etc to squeeze as much > performance as possible out of the raw storage. Let the application layer > deal with redundancy of data across the network: if a machine or disk dies, > so what, you have N other copies of that data elsewhere on the network. A > bit of storage is lost; do consider how many nodes can be down at any time > when operating these sorts of clusters. Big boxen with lots of storage may > seem attractive from a density perspective until you lose one, and 25% of > your storage capacity with it. Many smaller baskets... > > > > For network level data consistency, swift should have a data scrubber > (a periodic process to read and compare checksums of replicated blocks). I > have not checked if this is implemented or on the roadmap; I would be very > surprised if this was not a part of swift. > > > > *You can hint to the fs layer how to offset block writes by specifying a > stride width, which is the number of data-carrying disks in the array, and the > block size; typically the default is 64k for raid arrays. > > > > From: openstack-operators-bounces at lists.openstack.org > [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Cole > Crawford > Sent: 13 October 2011 13:51 > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] XFS documentation seems to conflict with > recommendations in Swift > > > > Generally mounting with -o nobarrier is a bad idea (ext4 or xfs), unless you > have disks that do not have write caches. Don't follow that > > recommendation, or for example - fsync won't work, which is something swift > relies upon. > > > > > > On Thu, Oct 13, 2011 at 9:18 AM, Marcelo Martins > wrote: > > Hi Jonathan, > > > > > > I guess that will depend on how your storage nodes are configured (hardware > wise).
?The reason why it's recommended is because the storage drives are > actually attached to a controller that has RiW cache enabled. > > > > > > > > Q. Should barriers be enabled with storage which has a persistent write > cache? > > Many hardware RAID have a persistent write cache which preserves it across > power failure, interface resets, system crashes, etc. Using write barriers > in this instance is not recommended and will in fact lower performance. > Therefore, it is recommended to turn off the barrier support and mount the > filesystem with "nobarrier". But take care about the hard disk write cache, > which should be off. > > > > > > Marcelo Martins > > Openstack-swift > > btorch-os at zeroaccess.org > > > > ?Knowledge is the wings on which our aspirations take flight and soar. When > it comes to surfing and life if you know what to do you can do it. If you > desire anything become educated about it and succeed. ? > > > > > > > > On Oct 12, 2011, at 10:08 AM, Jonathan Simms wrote: > > Hello all, > > I'm in the middle of a 120T Swift deployment, and I've had some > concerns about the backing filesystem. I formatted everything with > ext4 with 1024b inodes (for storing xattrs), but the process took so > long that I'm now looking at XFS again. In particular, this concerns > me http://xfs.org/index.php/XFS_FAQ#Write_barrier_support. > > In the swift documentation, it's recommended to mount the filesystems > w/ 'nobarrier', but it would seem to me that this would leave the data > open to corruption in the case of a crash. AFAIK, swift doesn't do > checksumming (and checksum checking) of stored data (after it is > written), which would mean that any data corruption would silently get > passed back to the users. > > Now, I haven't had operational experience running XFS in production, > I've mainly used ZFS, JFS, and ext{3,4}. Are there any recommendations > for using XFS safely in production? > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > ________________________________ > Sophos Limited, The Pentagon, Abingdon Science Park, Abingdon, OX14 3YP, > United Kingdom. > Company Reg No 2096520. VAT Reg No GB 991 2418 08. > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From J.O'Loughlin at surrey.ac.uk Tue Oct 25 21:23:11 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Tue, 25 Oct 2011 22:23:11 +0100 Subject: [Openstack-operators] swift accounts V users Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> Hi All, having problems understanding the concept of a swift account and how it relates to a user. Can anybody provide an explanation? Can an account have multiple users associated with it? 
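For example (purely a sketch with made-up names, assuming swauth and its super admin key), would this be the expected way to end up with two users under a single account:

  swauth-add-user -A https://127.0.0.1:8080/auth/ -K swauthkey -a acct1 alice alicepass
  swauth-add-user -A https://127.0.0.1:8080/auth/ -K swauthkey acct1 bob bobpass

i.e. alice and bob authenticate with their own credentials but read and write the same AUTH_acct1 storage URL?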
Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From andi.abes at gmail.com Tue Oct 25 22:42:29 2011 From: andi.abes at gmail.com (andi abes) Date: Tue, 25 Oct 2011 18:42:29 -0400 Subject: [Openstack-operators] swift accounts V users In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA4@EXMB01CMS.surrey.ac.uk> Message-ID: <8311731865945825881@unknownmsgid> An account maps to a tenant or a customer, and yes it can have many users and many containers. Access control is per user On Oct 25, 2011, at 17:24, "J.O'Loughlin at surrey.ac.uk" wrote: > > Hi All, > > having problems understanding the concept of a swift account and how it relates to a user. Can anybody provide an explanation? > Can an account have multiple users associated with it? > > Regards > > John O'Loughlin > FEPS IT, Service Delivery Team Leader > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From J.O'Loughlin at surrey.ac.uk Wed Oct 26 08:54:52 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 09:54:52 +0100 Subject: [Openstack-operators] glance and swift Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> Hi All, Has anybody managed to configure glance to use swift? I've created a glance account and user on swift and can upload files: >swift list -A https://127.0.0.1:8080/auth/v1.0/ -U glance:glance -K glance glance_bucket virtualization-2edition.pdf Now, I'm truing to update glance config, /etc/glance/glance-api.conf default_store = swift swift_store_auth_address = https://131.227.75.25:8080/auth/ swift_store_user = glance swift_store_key=glance swift_store_container = glance_bucket and restart glance, but when I upload images into nova they are ending up in local filesystem /var/lib/glance/images instead of in swift. Any help appreciated. Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From betodalas at gmail.com Wed Oct 26 09:09:33 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 07:09:33 -0200 Subject: [Openstack-operators] Xen With Openstack Message-ID: Hello, I installed Compute and New Glance a separate server. I'm trying to create VM on Xen by Dashboard. The panel is the pending status logs and shows that XenServer's picking up the image of the Glance, but the machine is not created. Follow the log: [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin R:cdbc860b307a|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed3 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = [ params: (dp0 S'auth_token' p1 NsS'glance_port' p2 I9292 sS'uuid_stack' p3 (lp4 S'a343ec2f-ad1c-4632-b7d9-1add8051c241' p5 aS'4b27c364-6626-4541-896a-65fb0d0b01d3' p6 asS'image_id' p7 S'4' p8 sS'glance_host' p9 S'10.168.1.30' p10 sS'sr_path' p11 S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' p12 s. 
] [20111024T14:18:24.251Z| info|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|dispatcher] spawning a new thread to handle the current task (tr ackid=a043138728544674d13b8d4a8ff673f7) [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed3 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ ] [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe host-list username=root password=null Follow the nova.conf: --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --lock_path=/var/lock/nova --verbose #--libvirt_type=xen --s3_host=10.168.1.32 --rabbit_host=10.168.1.32 --cc_host=10.168.1.32 --ec2_url=http://10.168.1.32:8773/services/Cloud --fixed_range=192.168.1.0/24 --network_size=250 --ec2_api=10.168.1.32 --routing_source_ip=10.168.1.32 --verbose --sql_connection=mysql://root:status64 at 10.168.1.32/nova --network_manager=nova.network.manager.FlatManager --glance_api_servers=10.168.1.32:9292 --image_service=nova.image.glance.GlanceImageService --flat_network_bridge=xenbr0 --connection_type=xenapi --xenapi_connection_url=https://10.168.1.31 --xenapi_connection_username=root --xenapi_connection_password=status64 --reboot_timeout=600 --rescue_timeout=86400 --resize_confirm_window=86400 --allow_resize_to_same_host New log-in information compute.log shows cpu, memory, about Xen Sevres, but does not create machines. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 09:32:36 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 11:32:36 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Hi, did you check what happens on XenServer's dom0? Are there some pending gzip processes? Deploy of vhd images can fail if they're are not properly created. You can find the rigth procedure here: https://answers.launchpad.net/nova/+question/161683 Hope it helps Giuseppe 2011/10/26 Roberto Dalas Z. Benavides : > Hello, I installed Compute and New Glance a separate server. I'm trying to > create VM on Xen by Dashboard. The panel is the pending status logs and > shows that XenServer's picking up the image of the Glance, but the machine > is not created. Follow the log: > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > R:cdbc860b307a|audit] Host.call_plugin host = > '9b3736e1-18ef-4147-8564-a9c64ed3 > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = [ > params: (dp0 > S'auth_token' > p1 > NsS'glance_port' > p2 > I9292 > sS'uuid_stack' > p3 > (lp4 > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > p5 > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > p6 > asS'image_id' > p7 > S'4' > p8 > sS'glance_host' > p9 > S'10.168.1.30' > p10 > sS'sr_path' > p11 > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > p12 > s. 
] > [20111024T14:18:24.251Z| > info|xenserver-opstack|746637|Async.host.call_plugin > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current task > (tr > ackid=a043138728544674d13b8d4a8ff673f7) > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > R:223f6eebc13d|audit] Host.call_plugin host = > '9b3736e1-18ef-4147-8564-a9c64ed3 > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ ] > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe > host-list username=root password=null > > Follow the nova.conf: > > --dhcpbridge_flagfile=/etc/nova/nova.conf > --dhcpbridge=/usr/bin/nova-dhcpbridge > --logdir=/var/log/nova > --state_path=/var/lib/nova > --lock_path=/var/lock/nova > --verbose > > #--libvirt_type=xen > --s3_host=10.168.1.32 > --rabbit_host=10.168.1.32 > --cc_host=10.168.1.32 > --ec2_url=http://10.168.1.32:8773/services/Cloud > --fixed_range=192.168.1.0/24 > --network_size=250 > --ec2_api=10.168.1.32 > --routing_source_ip=10.168.1.32 > --verbose > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > --network_manager=nova.network.manager.FlatManager > --glance_api_servers=10.168.1.32:9292 > --image_service=nova.image.glance.GlanceImageService > --flat_network_bridge=xenbr0 > --connection_type=xenapi > --xenapi_connection_url=https://10.168.1.31 > --xenapi_connection_username=root > --xenapi_connection_password=status64 > --reboot_timeout=600 > --rescue_timeout=86400 > --resize_confirm_window=86400 > --allow_resize_to_same_host > > New log-in information compute.log shows cpu, memory, about Xen Sevres, but > does not create machines. > > Thanks > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From betodalas at gmail.com Wed Oct 26 09:43:56 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 07:43:56 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: A doubt, the new server, compute, must be within a XenServer virtual machine ? The image must actually be as gzip, or you can get on the same Glance as vhd ? 2011/10/26 Giuseppe Civitella > Hi, > > did you check what happens on XenServer's dom0? > Are there some pending gzip processes? > Deploy of vhd images can fail if they're are not properly created. > You can find the rigth procedure here: > https://answers.launchpad.net/nova/+question/161683 > > Hope it helps > Giuseppe > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > Hello, I installed Compute and New Glance a separate server. I'm trying > to > > create VM on Xen by Dashboard. The panel is the pending status logs and > > shows that XenServer's picking up the image of the Glance, but the > machine > > is not created. 
Follow the log: > > > > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > > R:cdbc860b307a|audit] Host.call_plugin host = > > '9b3736e1-18ef-4147-8564-a9c64ed3 > > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = > [ > > params: (dp0 > > S'auth_token' > > p1 > > NsS'glance_port' > > p2 > > I9292 > > sS'uuid_stack' > > p3 > > (lp4 > > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > > p5 > > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > > p6 > > asS'image_id' > > p7 > > S'4' > > p8 > > sS'glance_host' > > p9 > > S'10.168.1.30' > > p10 > > sS'sr_path' > > p11 > > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > > p12 > > s. ] > > [20111024T14:18:24.251Z| > > info|xenserver-opstack|746637|Async.host.call_plugin > > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current > task > > (tr > > ackid=a043138728544674d13b8d4a8ff673f7) > > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > > R:223f6eebc13d|audit] Host.call_plugin host = > > '9b3736e1-18ef-4147-8564-a9c64ed3 > > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ > ] > > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe > > host-list username=root password=null > > > > Follow the nova.conf: > > > > --dhcpbridge_flagfile=/etc/nova/nova.conf > > --dhcpbridge=/usr/bin/nova-dhcpbridge > > --logdir=/var/log/nova > > --state_path=/var/lib/nova > > --lock_path=/var/lock/nova > > --verbose > > > > #--libvirt_type=xen > > --s3_host=10.168.1.32 > > --rabbit_host=10.168.1.32 > > --cc_host=10.168.1.32 > > --ec2_url=http://10.168.1.32:8773/services/Cloud > > --fixed_range=192.168.1.0/24 > > --network_size=250 > > --ec2_api=10.168.1.32 > > --routing_source_ip=10.168.1.32 > > --verbose > > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > > --network_manager=nova.network.manager.FlatManager > > --glance_api_servers=10.168.1.32:9292 > > --image_service=nova.image.glance.GlanceImageService > > --flat_network_bridge=xenbr0 > > --connection_type=xenapi > > --xenapi_connection_url=https://10.168.1.31 > > --xenapi_connection_username=root > > --xenapi_connection_password=status64 > > --reboot_timeout=600 > > --rescue_timeout=86400 > > --resize_confirm_window=86400 > > --allow_resize_to_same_host > > > > New log-in information compute.log shows cpu, memory, about Xen Sevres, > but > > does not create machines. > > > > Thanks > > _______________________________________________ > > Openstack-operators mailing list > > Openstack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 10:00:29 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 12:00:29 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Yes, the nova-compute service has to run on a domU. You need to install XenServer's plugins on dom0 (have a look here: http://wiki.openstack.org/XenServerDevelopment). The domU will tell the dom0 to deploy images via xenapi. You need to extract you vhd image, rename it image.vhd and then gzip it. Glance plugin on XenServer expect vhd images to be gzipped, so if you don't compress them the deploy process will fail. Cheers, Giuseppe 2011/10/26 Roberto Dalas Z. 
Benavides : > A doubt, the new server, compute, must be within a XenServer virtual > machine? > The image must actually be as gzip, or you can get on the same Glance as > vhd? > > 2011/10/26 Giuseppe Civitella >> >> Hi, >> >> did you check what happens on XenServer's dom0? >> Are there some pending gzip processes? >> Deploy of vhd images can fail if they're are not properly created. >> You can find the rigth procedure here: >> https://answers.launchpad.net/nova/+question/161683 >> >> Hope it helps >> Giuseppe >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > Hello, I installed Compute and New Glance a separate server. I'm trying >> > to >> > create VM on Xen by Dashboard. The panel is the pending status logs and >> > shows that XenServer's picking up the image of the Glance, but the >> > machine >> > is not created. Follow the log: >> > >> > >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> > R:cdbc860b307a|audit] Host.call_plugin host = >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args >> > = [ >> > params: (dp0 >> > S'auth_token' >> > p1 >> > NsS'glance_port' >> > p2 >> > I9292 >> > sS'uuid_stack' >> > p3 >> > (lp4 >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> > p5 >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> > p6 >> > asS'image_id' >> > p7 >> > S'4' >> > p8 >> > sS'glance_host' >> > p9 >> > S'10.168.1.30' >> > p10 >> > sS'sr_path' >> > p11 >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> > p12 >> > s. ] >> > [20111024T14:18:24.251Z| >> > info|xenserver-opstack|746637|Async.host.call_plugin >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current >> > task >> > (tr >> > ackid=a043138728544674d13b8d4a8ff673f7) >> > >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> > R:223f6eebc13d|audit] Host.call_plugin host = >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = >> > [ ] >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe >> > host-list username=root password=null >> > >> > Follow the nova.conf: >> > >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> > --logdir=/var/log/nova >> > --state_path=/var/lib/nova >> > --lock_path=/var/lock/nova >> > --verbose >> > >> > #--libvirt_type=xen >> > --s3_host=10.168.1.32 >> > --rabbit_host=10.168.1.32 >> > --cc_host=10.168.1.32 >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> > --fixed_range=192.168.1.0/24 >> > --network_size=250 >> > --ec2_api=10.168.1.32 >> > --routing_source_ip=10.168.1.32 >> > --verbose >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> > --network_manager=nova.network.manager.FlatManager >> > --glance_api_servers=10.168.1.32:9292 >> > --image_service=nova.image.glance.GlanceImageService >> > --flat_network_bridge=xenbr0 >> > --connection_type=xenapi >> > --xenapi_connection_url=https://10.168.1.31 >> > --xenapi_connection_username=root >> > --xenapi_connection_password=status64 >> > --reboot_timeout=600 >> > --rescue_timeout=86400 >> > --resize_confirm_window=86400 >> > --allow_resize_to_same_host >> > >> > New log-in information compute.log shows cpu, memory, about Xen Sevres, >> > but >> > does not create machines. 
>> > >> > Thanks >> > _______________________________________________ >> > Openstack-operators mailing list >> > Openstack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > >> > > > From betodalas at gmail.com Wed Oct 26 10:15:07 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 08:15:07 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: I have an image vmdk and am doing the following: add name = glance lucid_ovf disk_format container_format vhd = = = OVF is_public True > Yes, the nova-compute service has to run on a domU. > You need to install XenServer's plugins on dom0 (have a look here: > http://wiki.openstack.org/XenServerDevelopment). > The domU will tell the dom0 to deploy images via xenapi. > You need to extract you vhd image, rename it image.vhd and then gzip it. > Glance plugin on XenServer expect vhd images to be gzipped, so if you > don't compress them the deploy process will fail. > > Cheers, > Giuseppe > > > > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > A doubt, the new server, compute, must be within a XenServer virtual > > machine? > > The image must actually be as gzip, or you can get on the same Glance as > > vhd? > > > > 2011/10/26 Giuseppe Civitella > >> > >> Hi, > >> > >> did you check what happens on XenServer's dom0? > >> Are there some pending gzip processes? > >> Deploy of vhd images can fail if they're are not properly created. > >> You can find the rigth procedure here: > >> https://answers.launchpad.net/nova/+question/161683 > >> > >> Hope it helps > >> Giuseppe > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > Hello, I installed Compute and New Glance a separate server. I'm > trying > >> > to > >> > create VM on Xen by Dashboard. The panel is the pending status logs > and > >> > shows that XenServer's picking up the image of the Glance, but the > >> > machine > >> > is not created. Follow the log: > >> > > >> > > >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; > args > >> > = [ > >> > params: (dp0 > >> > S'auth_token' > >> > p1 > >> > NsS'glance_port' > >> > p2 > >> > I9292 > >> > sS'uuid_stack' > >> > p3 > >> > (lp4 > >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> > p5 > >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> > p6 > >> > asS'image_id' > >> > p7 > >> > S'4' > >> > p8 > >> > sS'glance_host' > >> > p9 > >> > S'10.168.1.30' > >> > p10 > >> > sS'sr_path' > >> > p11 > >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> > p12 > >> > s. 
] > >> > [20111024T14:18:24.251Z| > >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the current > >> > task > >> > (tr > >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> > > >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args > = > >> > [ ] > >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] > xe > >> > host-list username=root password=null > >> > > >> > Follow the nova.conf: > >> > > >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> > --logdir=/var/log/nova > >> > --state_path=/var/lib/nova > >> > --lock_path=/var/lock/nova > >> > --verbose > >> > > >> > #--libvirt_type=xen > >> > --s3_host=10.168.1.32 > >> > --rabbit_host=10.168.1.32 > >> > --cc_host=10.168.1.32 > >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> > --fixed_range=192.168.1.0/24 > >> > --network_size=250 > >> > --ec2_api=10.168.1.32 > >> > --routing_source_ip=10.168.1.32 > >> > --verbose > >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> > --network_manager=nova.network.manager.FlatManager > >> > --glance_api_servers=10.168.1.32:9292 > >> > --image_service=nova.image.glance.GlanceImageService > >> > --flat_network_bridge=xenbr0 > >> > --connection_type=xenapi > >> > --xenapi_connection_url=https://10.168.1.31 > >> > --xenapi_connection_username=root > >> > --xenapi_connection_password=status64 > >> > --reboot_timeout=600 > >> > --rescue_timeout=86400 > >> > --resize_confirm_window=86400 > >> > --allow_resize_to_same_host > >> > > >> > New log-in information compute.log shows cpu, memory, about Xen > Sevres, > >> > but > >> > does not create machines. > >> > > >> > Thanks > >> > _______________________________________________ > >> > Openstack-operators mailing list > >> > Openstack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 10:23:22 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 12:23:22 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: It has to be a vhd image. You can try XenConverter to get a vhd from a vmdk. Cheers, Giuseppe 2011/10/26 Roberto Dalas Z. Benavides : > I have an image vmdk and am doing the following: > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > is_public True > Thanks > > 2011/10/26 Giuseppe Civitella >> >> Yes, the nova-compute service has to run on a domU. >> You need to install XenServer's plugins on dom0 (have a look here: >> http://wiki.openstack.org/XenServerDevelopment). >> The domU will tell the dom0 to deploy images via xenapi. >> You need to extract you vhd image, rename it image.vhd and then gzip it. >> Glance plugin on XenServer expect vhd images to be gzipped, so if you >> don't compress them the deploy process will fail. >> >> Cheers, >> Giuseppe >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > A doubt, the new server, compute, must be within a XenServer virtual >> > machine? >> > The image must actually be as gzip, or you can get on the same Glance as >> > vhd? 
>> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> Hi, >> >> >> >> did you check what happens on XenServer's dom0? >> >> Are there some pending gzip processes? >> >> Deploy of vhd images can fail if they're are not properly created. >> >> You can find the rigth procedure here: >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> Hope it helps >> >> Giuseppe >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > Hello, I installed Compute and New Glance a separate server. I'm >> >> > trying >> >> > to >> >> > create VM on Xen by Dashboard. The panel is the pending status logs >> >> > and >> >> > shows that XenServer's picking up the image of the Glance, but the >> >> > machine >> >> > is not created. Follow the log: >> >> > >> >> > >> >> > >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; >> >> > args >> >> > = [ >> >> > params: (dp0 >> >> > S'auth_token' >> >> > p1 >> >> > NsS'glance_port' >> >> > p2 >> >> > I9292 >> >> > sS'uuid_stack' >> >> > p3 >> >> > (lp4 >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> > p5 >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> > p6 >> >> > asS'image_id' >> >> > p7 >> >> > S'4' >> >> > p8 >> >> > sS'glance_host' >> >> > p9 >> >> > S'10.168.1.30' >> >> > p10 >> >> > sS'sr_path' >> >> > p11 >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> > p12 >> >> > s. ] >> >> > [20111024T14:18:24.251Z| >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> > current >> >> > task >> >> > (tr >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> > >> >> > >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args >> >> > = >> >> > [ ] >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] >> >> > xe >> >> > host-list username=root password=null >> >> > >> >> > Follow the nova.conf: >> >> > >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> > --logdir=/var/log/nova >> >> > --state_path=/var/lib/nova >> >> > --lock_path=/var/lock/nova >> >> > --verbose >> >> > >> >> > #--libvirt_type=xen >> >> > --s3_host=10.168.1.32 >> >> > --rabbit_host=10.168.1.32 >> >> > --cc_host=10.168.1.32 >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> > --fixed_range=192.168.1.0/24 >> >> > --network_size=250 >> >> > --ec2_api=10.168.1.32 >> >> > --routing_source_ip=10.168.1.32 >> >> > --verbose >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> > --network_manager=nova.network.manager.FlatManager >> >> > --glance_api_servers=10.168.1.32:9292 >> >> > --image_service=nova.image.glance.GlanceImageService >> >> > --flat_network_bridge=xenbr0 >> >> > --connection_type=xenapi >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> > --xenapi_connection_username=root >> >> > --xenapi_connection_password=status64 >> >> > --reboot_timeout=600 >> >> > --rescue_timeout=86400 >> >> > --resize_confirm_window=86400 >> >> > --allow_resize_to_same_host >> >> > >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> > Sevres, >> >> > 
but >> >> > does not create machines. >> >> > >> >> > Thanks >> >> > _______________________________________________ >> >> > Openstack-operators mailing list >> >> > Openstack-operators at lists.openstack.org >> >> > >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > >> >> > >> > >> > > > From betodalas at gmail.com Wed Oct 26 11:30:24 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 09:30:24 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Can i use the command ? add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > is_public True < imagem.vhd 2011/10/26 Giuseppe Civitella > It has to be a vhd image. > You can try XenConverter to get a vhd from a vmdk. > > Cheers, > Giuseppe > > 2011/10/26 Roberto Dalas Z. Benavides : > > I have an image vmdk and am doing the following: > > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > > is_public True > > > Thanks > > > > 2011/10/26 Giuseppe Civitella > >> > >> Yes, the nova-compute service has to run on a domU. > >> You need to install XenServer's plugins on dom0 (have a look here: > >> http://wiki.openstack.org/XenServerDevelopment). > >> The domU will tell the dom0 to deploy images via xenapi. > >> You need to extract you vhd image, rename it image.vhd and then gzip it. > >> Glance plugin on XenServer expect vhd images to be gzipped, so if you > >> don't compress them the deploy process will fail. > >> > >> Cheers, > >> Giuseppe > >> > >> > >> > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > A doubt, the new server, compute, must be within a XenServer virtual > >> > machine? > >> > The image must actually be as gzip, or you can get on the same Glance > as > >> > vhd? > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> Hi, > >> >> > >> >> did you check what happens on XenServer's dom0? > >> >> Are there some pending gzip processes? > >> >> Deploy of vhd images can fail if they're are not properly created. > >> >> You can find the rigth procedure here: > >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> > >> >> Hope it helps > >> >> Giuseppe > >> >> > >> >> > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > Hello, I installed Compute and New Glance a separate server. I'm > >> >> > trying > >> >> > to > >> >> > create VM on Xen by Dashboard. The panel is the pending status logs > >> >> > and > >> >> > shows that XenServer's picking up the image of the Glance, but the > >> >> > machine > >> >> > is not created. Follow the log: > >> >> > > >> >> > > >> >> > > >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; > >> >> > args > >> >> > = [ > >> >> > params: (dp0 > >> >> > S'auth_token' > >> >> > p1 > >> >> > NsS'glance_port' > >> >> > p2 > >> >> > I9292 > >> >> > sS'uuid_stack' > >> >> > p3 > >> >> > (lp4 > >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> > p5 > >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> > p6 > >> >> > asS'image_id' > >> >> > p7 > >> >> > S'4' > >> >> > p8 > >> >> > sS'glance_host' > >> >> > p9 > >> >> > S'10.168.1.30' > >> >> > p10 > >> >> > sS'sr_path' > >> >> > p11 > >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> > p12 > >> >> > s. 
] > >> >> > [20111024T14:18:24.251Z| > >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the > >> >> > current > >> >> > task > >> >> > (tr > >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> > > >> >> > > >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; > args > >> >> > = > >> >> > [ ] > >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > unix-RPC||cli] > >> >> > xe > >> >> > host-list username=root password=null > >> >> > > >> >> > Follow the nova.conf: > >> >> > > >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> > --logdir=/var/log/nova > >> >> > --state_path=/var/lib/nova > >> >> > --lock_path=/var/lock/nova > >> >> > --verbose > >> >> > > >> >> > #--libvirt_type=xen > >> >> > --s3_host=10.168.1.32 > >> >> > --rabbit_host=10.168.1.32 > >> >> > --cc_host=10.168.1.32 > >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> > --fixed_range=192.168.1.0/24 > >> >> > --network_size=250 > >> >> > --ec2_api=10.168.1.32 > >> >> > --routing_source_ip=10.168.1.32 > >> >> > --verbose > >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> > --network_manager=nova.network.manager.FlatManager > >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> > --flat_network_bridge=xenbr0 > >> >> > --connection_type=xenapi > >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> > --xenapi_connection_username=root > >> >> > --xenapi_connection_password=status64 > >> >> > --reboot_timeout=600 > >> >> > --rescue_timeout=86400 > >> >> > --resize_confirm_window=86400 > >> >> > --allow_resize_to_same_host > >> >> > > >> >> > New log-in information compute.log shows cpu, memory, about Xen > >> >> > Sevres, > >> >> > but > >> >> > does not create machines. > >> >> > > >> >> > Thanks > >> >> > _______________________________________________ > >> >> > Openstack-operators mailing list > >> >> > Openstack-operators at lists.openstack.org > >> >> > > >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 11:44:38 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 13:44:38 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: If imagem.vhd is a gzipped tar archive containing a file called image.vhd, your command should work this way: glance add name=lucid_ovf disk_format=vhd container_format=ovf is_public=True < imagem.vhd 2011/10/26 Roberto Dalas Z. Benavides : > Can i use the command ? > > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > is_public True < imagem.vhd > > 2011/10/26 Giuseppe Civitella >> >> It has to be a vhd image. >> You can try XenConverter to get a vhd from a vmdk. >> >> Cheers, >> Giuseppe >> >> 2011/10/26 Roberto Dalas Z. 
Benavides : >> > I have an image vmdk and am doing the following: >> > add name = glance lucid_ovf disk_format container_format vhd = = = OVF >> > is_public True > > >> > Thanks >> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> Yes, the nova-compute service has to run on a domU. >> >> You need to install XenServer's plugins on dom0 (have a look here: >> >> http://wiki.openstack.org/XenServerDevelopment). >> >> The domU will tell the dom0 to deploy images via xenapi. >> >> You need to extract you vhd image, rename it image.vhd and then gzip >> >> it. >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if you >> >> don't compress them the deploy process will fail. >> >> >> >> Cheers, >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > A doubt, the new server, compute, must be within a XenServer virtual >> >> > machine? >> >> > The image must actually be as gzip, or you can get on the same Glance >> >> > as >> >> > vhd? >> >> > >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> Hi, >> >> >> >> >> >> did you check what happens on XenServer's dom0? >> >> >> Are there some pending gzip processes? >> >> >> Deploy of vhd images can fail if they're are not properly created. >> >> >> You can find the rigth procedure here: >> >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> >> >> Hope it helps >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> > Hello, I installed Compute and New Glance a separate server. I'm >> >> >> > trying >> >> >> > to >> >> >> > create VM on Xen by Dashboard. The panel is the pending status >> >> >> > logs >> >> >> > and >> >> >> > shows that XenServer's picking up the image of the Glance, but the >> >> >> > machine >> >> >> > is not created. Follow the log: >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; >> >> >> > args >> >> >> > = [ >> >> >> > params: (dp0 >> >> >> > S'auth_token' >> >> >> > p1 >> >> >> > NsS'glance_port' >> >> >> > p2 >> >> >> > I9292 >> >> >> > sS'uuid_stack' >> >> >> > p3 >> >> >> > (lp4 >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> >> > p5 >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> >> > p6 >> >> >> > asS'image_id' >> >> >> > p7 >> >> >> > S'4' >> >> >> > p8 >> >> >> > sS'glance_host' >> >> >> > p9 >> >> >> > S'10.168.1.30' >> >> >> > p10 >> >> >> > sS'sr_path' >> >> >> > p11 >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> >> > p12 >> >> >> > s. 
] >> >> >> > [20111024T14:18:24.251Z| >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> >> > current >> >> >> > task >> >> >> > (tr >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> >> > >> >> >> > >> >> >> > >> >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; >> >> >> > args >> >> >> > = >> >> >> > [ ] >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 >> >> >> > unix-RPC||cli] >> >> >> > xe >> >> >> > host-list username=root password=null >> >> >> > >> >> >> > Follow the nova.conf: >> >> >> > >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> >> > --logdir=/var/log/nova >> >> >> > --state_path=/var/lib/nova >> >> >> > --lock_path=/var/lock/nova >> >> >> > --verbose >> >> >> > >> >> >> > #--libvirt_type=xen >> >> >> > --s3_host=10.168.1.32 >> >> >> > --rabbit_host=10.168.1.32 >> >> >> > --cc_host=10.168.1.32 >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> >> > --fixed_range=192.168.1.0/24 >> >> >> > --network_size=250 >> >> >> > --ec2_api=10.168.1.32 >> >> >> > --routing_source_ip=10.168.1.32 >> >> >> > --verbose >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> >> > --network_manager=nova.network.manager.FlatManager >> >> >> > --glance_api_servers=10.168.1.32:9292 >> >> >> > --image_service=nova.image.glance.GlanceImageService >> >> >> > --flat_network_bridge=xenbr0 >> >> >> > --connection_type=xenapi >> >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> >> > --xenapi_connection_username=root >> >> >> > --xenapi_connection_password=status64 >> >> >> > --reboot_timeout=600 >> >> >> > --rescue_timeout=86400 >> >> >> > --resize_confirm_window=86400 >> >> >> > --allow_resize_to_same_host >> >> >> > >> >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> >> > Sevres, >> >> >> > but >> >> >> > does not create machines. 
>> >> >> > >> >> >> > Thanks >> >> >> > _______________________________________________ >> >> >> > Openstack-operators mailing list >> >> >> > Openstack-operators at lists.openstack.org >> >> >> > >> >> >> > >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > > > From J.O'Loughlin at surrey.ac.uk Wed Oct 26 11:48:59 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 12:48:59 +0100 Subject: [Openstack-operators] glance and swift In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CA7@EXMB01CMS.surrey.ac.uk> Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CAA@EXMB01CMS.surrey.ac.uk> And this is the error message I'm seeing in the logs ERROR [glance.store.swift] Could not find swift_store_auth_address in configuration options Regards John O'Loughlin FEPS IT, Service Delivery Team Leader ________________________________________ From: openstack-operators-bounces at lists.openstack.org [openstack-operators-bounces at lists.openstack.org] On Behalf Of J.O'Loughlin at surrey.ac.uk [J.O'Loughlin at surrey.ac.uk] Sent: 26 October 2011 09:54 To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] glance and swift Hi All, Has anybody managed to configure glance to use swift? I've created a glance account and user on swift and can upload files: >swift list -A https://127.0.0.1:8080/auth/v1.0/ -U glance:glance -K glance glance_bucket virtualization-2edition.pdf Now, I'm truing to update glance config, /etc/glance/glance-api.conf default_store = swift swift_store_auth_address = https://131.227.75.25:8080/auth/ swift_store_user = glance swift_store_key=glance swift_store_container = glance_bucket and restart glance, but when I upload images into nova they are ending up in local filesystem /var/lib/glance/images instead of in swift. Any help appreciated. Kind Regards John O'Loughlin FEPS IT, Service Delivery Team Leader _______________________________________________ Openstack-operators mailing list Openstack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From betodalas at gmail.com Wed Oct 26 12:43:29 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 10:43:29 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: My Openstack Versions is 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) Is correct? Which version is more stable? 2011/10/26 Giuseppe Civitella > If imagem.vhd is a gzipped tar archive containing a file called > image.vhd, your command should work this way: > glance add name=lucid_ovf disk_format=vhd container_format=ovf > is_public=True < imagem.vhd > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > Can i use the command ? > > > > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > > is_public True < imagem.vhd > > > > 2011/10/26 Giuseppe Civitella > >> > >> It has to be a vhd image. > >> You can try XenConverter to get a vhd from a vmdk. > >> > >> Cheers, > >> Giuseppe > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > I have an image vmdk and am doing the following: > >> > add name = glance lucid_ovf disk_format container_format vhd = = = OVF > >> > is_public True >> > > >> > Thanks > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> Yes, the nova-compute service has to run on a domU. 
> >> >> You need to install XenServer's plugins on dom0 (have a look here: > >> >> http://wiki.openstack.org/XenServerDevelopment). > >> >> The domU will tell the dom0 to deploy images via xenapi. > >> >> You need to extract you vhd image, rename it image.vhd and then gzip > >> >> it. > >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if you > >> >> don't compress them the deploy process will fail. > >> >> > >> >> Cheers, > >> >> Giuseppe > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > A doubt, the new server, compute, must be within a XenServer > virtual > >> >> > machine? > >> >> > The image must actually be as gzip, or you can get on the same > Glance > >> >> > as > >> >> > vhd? > >> >> > > >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> > >> >> >> Hi, > >> >> >> > >> >> >> did you check what happens on XenServer's dom0? > >> >> >> Are there some pending gzip processes? > >> >> >> Deploy of vhd images can fail if they're are not properly created. > >> >> >> You can find the rigth procedure here: > >> >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> >> > >> >> >> Hope it helps > >> >> >> Giuseppe > >> >> >> > >> >> >> > >> >> >> > >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> > Hello, I installed Compute and New Glance a separate server. I'm > >> >> >> > trying > >> >> >> > to > >> >> >> > create VM on Xen by Dashboard. The panel is the pending status > >> >> >> > logs > >> >> >> > and > >> >> >> > shows that XenServer's picking up the image of the Glance, but > the > >> >> >> > machine > >> >> >> > is not created. Follow the log: > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = > 'download_vhd'; > >> >> >> > args > >> >> >> > = [ > >> >> >> > params: (dp0 > >> >> >> > S'auth_token' > >> >> >> > p1 > >> >> >> > NsS'glance_port' > >> >> >> > p2 > >> >> >> > I9292 > >> >> >> > sS'uuid_stack' > >> >> >> > p3 > >> >> >> > (lp4 > >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> >> > p5 > >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> >> > p6 > >> >> >> > asS'image_id' > >> >> >> > p7 > >> >> >> > S'4' > >> >> >> > p8 > >> >> >> > sS'glance_host' > >> >> >> > p9 > >> >> >> > S'10.168.1.30' > >> >> >> > p10 > >> >> >> > sS'sr_path' > >> >> >> > p11 > >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> >> > p12 > >> >> >> > s. 
] > >> >> >> > [20111024T14:18:24.251Z| > >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the > >> >> >> > current > >> >> >> > task > >> >> >> > (tr > >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; > >> >> >> > args > >> >> >> > = > >> >> >> > [ ] > >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > >> >> >> > unix-RPC||cli] > >> >> >> > xe > >> >> >> > host-list username=root password=null > >> >> >> > > >> >> >> > Follow the nova.conf: > >> >> >> > > >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> >> > --logdir=/var/log/nova > >> >> >> > --state_path=/var/lib/nova > >> >> >> > --lock_path=/var/lock/nova > >> >> >> > --verbose > >> >> >> > > >> >> >> > #--libvirt_type=xen > >> >> >> > --s3_host=10.168.1.32 > >> >> >> > --rabbit_host=10.168.1.32 > >> >> >> > --cc_host=10.168.1.32 > >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> >> > --fixed_range=192.168.1.0/24 > >> >> >> > --network_size=250 > >> >> >> > --ec2_api=10.168.1.32 > >> >> >> > --routing_source_ip=10.168.1.32 > >> >> >> > --verbose > >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> >> > --network_manager=nova.network.manager.FlatManager > >> >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> >> > --flat_network_bridge=xenbr0 > >> >> >> > --connection_type=xenapi > >> >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> >> > --xenapi_connection_username=root > >> >> >> > --xenapi_connection_password=status64 > >> >> >> > --reboot_timeout=600 > >> >> >> > --rescue_timeout=86400 > >> >> >> > --resize_confirm_window=86400 > >> >> >> > --allow_resize_to_same_host > >> >> >> > > >> >> >> > New log-in information compute.log shows cpu, memory, about Xen > >> >> >> > Sevres, > >> >> >> > but > >> >> >> > does not create machines. > >> >> >> > > >> >> >> > Thanks > >> >> >> > _______________________________________________ > >> >> >> > Openstack-operators mailing list > >> >> >> > Openstack-operators at lists.openstack.org > >> >> >> > > >> >> >> > > >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.civitella at gmail.com Wed Oct 26 12:57:29 2011 From: giuseppe.civitella at gmail.com (Giuseppe Civitella) Date: Wed, 26 Oct 2011 14:57:29 +0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: I'm currently using Diablo (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) and it works with XenServer 5.6 (it should with XCP 1.1 too). I did non try yet Essex, sorry. 2011/10/26 Roberto Dalas Z. Benavides : > My Openstack Versions is > > 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) > > Is correct? > > Which version is more stable? 
> > 2011/10/26 Giuseppe Civitella >> >> If ?imagem.vhd is a gzipped tar archive containing a file called >> image.vhd, your command should work this way: >> glance add name=lucid_ovf disk_format=vhd container_format=ovf >> is_public=True < imagem.vhd >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> > Can i use the command ? >> > >> > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > >> > is_public True < imagem.vhd >> > >> > 2011/10/26 Giuseppe Civitella >> >> >> >> It has to be a vhd image. >> >> You can try XenConverter to get a vhd from a vmdk. >> >> >> >> Cheers, >> >> Giuseppe >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> > I have an image vmdk and am doing the following: >> >> > add name = glance lucid_ovf disk_format container_format vhd = = = >> >> > OVF >> >> > is_public True > >> > >> >> > Thanks >> >> > >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> Yes, the nova-compute service has to run on a domU. >> >> >> You need to install XenServer's plugins on dom0 (have a look here: >> >> >> http://wiki.openstack.org/XenServerDevelopment). >> >> >> The domU will tell the dom0 to deploy images via xenapi. >> >> >> You need to extract you vhd image, rename it image.vhd and then gzip >> >> >> it. >> >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if >> >> >> you >> >> >> don't compress them the deploy process will fail. >> >> >> >> >> >> Cheers, >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> > A doubt, the new server, compute, must be within a XenServer >> >> >> > virtual >> >> >> > machine? >> >> >> > The image must actually be as gzip, or you can get on the same >> >> >> > Glance >> >> >> > as >> >> >> > vhd? >> >> >> > >> >> >> > 2011/10/26 Giuseppe Civitella >> >> >> >> >> >> >> >> Hi, >> >> >> >> >> >> >> >> did you check what happens on XenServer's dom0? >> >> >> >> Are there some pending gzip processes? >> >> >> >> Deploy of vhd images can fail if they're are not properly >> >> >> >> created. >> >> >> >> You can find the rigth procedure here: >> >> >> >> https://answers.launchpad.net/nova/+question/161683 >> >> >> >> >> >> >> >> Hope it helps >> >> >> >> Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : >> >> >> >> > Hello, I installed Compute and New Glance a separate server. >> >> >> >> > I'm >> >> >> >> > trying >> >> >> >> > to >> >> >> >> > create VM on Xen by Dashboard. The panel is the pending status >> >> >> >> > logs >> >> >> >> > and >> >> >> >> > shows that XenServer's picking up the image of the Glance, but >> >> >> >> > the >> >> >> >> > machine >> >> >> >> > is not created. 
Follow the log: >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin >> >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = >> >> >> >> > 'download_vhd'; >> >> >> >> > args >> >> >> >> > = [ >> >> >> >> > params: (dp0 >> >> >> >> > S'auth_token' >> >> >> >> > p1 >> >> >> >> > NsS'glance_port' >> >> >> >> > p2 >> >> >> >> > I9292 >> >> >> >> > sS'uuid_stack' >> >> >> >> > p3 >> >> >> >> > (lp4 >> >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' >> >> >> >> > p5 >> >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' >> >> >> >> > p6 >> >> >> >> > asS'image_id' >> >> >> >> > p7 >> >> >> >> > S'4' >> >> >> >> > p8 >> >> >> >> > sS'glance_host' >> >> >> >> > p9 >> >> >> >> > S'10.168.1.30' >> >> >> >> > p10 >> >> >> >> > sS'sr_path' >> >> >> >> > p11 >> >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' >> >> >> >> > p12 >> >> >> >> > s. ] >> >> >> >> > [20111024T14:18:24.251Z| >> >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin >> >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle the >> >> >> >> > current >> >> >> >> > task >> >> >> >> > (tr >> >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin >> >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = >> >> >> >> > 'host_data'; >> >> >> >> > args >> >> >> >> > = >> >> >> >> > [ ] >> >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 >> >> >> >> > unix-RPC||cli] >> >> >> >> > xe >> >> >> >> > host-list username=root password=null >> >> >> >> > >> >> >> >> > Follow the nova.conf: >> >> >> >> > >> >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf >> >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge >> >> >> >> > --logdir=/var/log/nova >> >> >> >> > --state_path=/var/lib/nova >> >> >> >> > --lock_path=/var/lock/nova >> >> >> >> > --verbose >> >> >> >> > >> >> >> >> > #--libvirt_type=xen >> >> >> >> > --s3_host=10.168.1.32 >> >> >> >> > --rabbit_host=10.168.1.32 >> >> >> >> > --cc_host=10.168.1.32 >> >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud >> >> >> >> > --fixed_range=192.168.1.0/24 >> >> >> >> > --network_size=250 >> >> >> >> > --ec2_api=10.168.1.32 >> >> >> >> > --routing_source_ip=10.168.1.32 >> >> >> >> > --verbose >> >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova >> >> >> >> > --network_manager=nova.network.manager.FlatManager >> >> >> >> > --glance_api_servers=10.168.1.32:9292 >> >> >> >> > --image_service=nova.image.glance.GlanceImageService >> >> >> >> > --flat_network_bridge=xenbr0 >> >> >> >> > --connection_type=xenapi >> >> >> >> > --xenapi_connection_url=https://10.168.1.31 >> >> >> >> > --xenapi_connection_username=root >> >> >> >> > --xenapi_connection_password=status64 >> >> >> >> > --reboot_timeout=600 >> >> >> >> > --rescue_timeout=86400 >> >> >> >> > --resize_confirm_window=86400 >> >> >> >> > --allow_resize_to_same_host >> >> >> >> > >> >> >> >> > New log-in information compute.log shows cpu, memory, about Xen >> >> >> >> > Sevres, >> >> >> >> > but >> >> >> >> > does not create machines. 
>> >> >> >> > >> >> >> >> > Thanks >> >> >> >> > _______________________________________________ >> >> >> >> > Openstack-operators mailing list >> >> >> >> > Openstack-operators at lists.openstack.org >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> >> > >> >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > > > From J.O'Loughlin at surrey.ac.uk Wed Oct 26 13:16:49 2011 From: J.O'Loughlin at surrey.ac.uk (J.O'Loughlin at surrey.ac.uk) Date: Wed, 26 Oct 2011 14:16:49 +0100 Subject: [Openstack-operators] Roles Message-ID: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9CAB@EXMB01CMS.surrey.ac.uk> Hi All, I'm running trunk on 10.10 I've just created a user and added to a project: nova-manage user create tom nova-project add project2 tom at this stage no roles added: my understanding is that a euca-describe-images should just show images in project? the new user can see all images, all instances in all projects, can start an instance from any image even if marked private, can allocate themselves an address and can then assign that to any other user instances! After the above I gave tom the sysadmin role (global and then in the project). Makes no difference to what they can and cant do. Is this normal behaviour? Regards John O'Loughlin FEPS IT, Service Delivery Team Leader From renato at dualtec.com.br Wed Oct 26 14:25:40 2011 From: renato at dualtec.com.br (Renato Serra Armani) Date: Wed, 26 Oct 2011 14:25:40 +0000 Subject: [Openstack-operators] GIT Version Message-ID: <5669DADFECCDF4468DA742C275FB64FF24B69852@DUALTEC-EXC-1A.dualtec.local> Hi everyone It is my first e-mail. My name is Renato S. Armani and I'm from Brazil. Since august with some folks from python brazilian comunity, and from other companies and initiatives here in Brazil we started to test openstack. Soon as possible we will get more know-how and I'll be glad to contribute highly with the Openstack community. My first question is: Today we are trying to install Openstack over Ubuntu using the scritpt from the Openstack Manual (git clone git://github.com/cloudbuilders/devstack.git...) My question is about the version of this git file. I checked out the version using "sudo nova-compute version list" and the displayed version is the "2012.1" researching on the web I understood that this version is related to the ESSEX release and not releated to the Diablo release that supposed to be 2011.3. I'm a little confused about it because I'm not understanding why is not the DIABLO release instead the ESSEX in the official script? I'll appreciate if anyone can explain this for me? Best Regards, Renato S. Armani -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Wed Oct 26 19:04:11 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:04:11 -0200 Subject: [Openstack-operators] OpenStack With KVM Message-ID: Hello, I installed a server with kvm and would like to know how to have the talk with this kvm OpenStack. What should I put in nova.conf? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:11:39 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:11:39 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: Message-ID: <4EA85B6B.5050201@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:04 PM, Roberto Dalas Z. 
Benavides wrote: > Hello, I installed a server with kvm and would like to know how to have > the talk with this kvm OpenStack. What should I put in nova.conf? > > Thanks > - --libvirt_type=kvm should do the trick. - --sharif - -- Sharif Islam Senior Systems Analyst/Programmer FutureGrid (http://futuregrid.org) Pervasive Technology Institute, Indiana University Bloomington -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqFtrAAoJEACffes9SivFGNcIAKeHf3hBAMcWQinUN9dRvX4d wCG/mJhlpHB90rFPfvV6b2vDZQ2K0M62TAy6PfOoq5K4z+2oHHBQ4vNtUQCdZOM5 NCvXPsQTgEwRg6DCKM/obru8P8hQ4rqlTyF3AAretattzUuNbrCj7hOR1IrlqrlC qL+MN9Zv2BXTApiHyL7KMsJvK1b9MhD8Ww0oMlwKL7GXQzNn4JtDiCIKz0A1Louc HduNjw1aGuWGzWJ4ApOTLX1HBXPvfnJNlF9HLX8XsEF4/36bp4zEZNYmbqcHs80K 6mSSo0aOc31Jy/bicZX75t1dp0qt5sKQi0vTxd6E1mFYUdsMacx9rAOCwHpZxxs= =+wZA -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 19:19:08 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:19:08 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA85B6B.5050201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> Message-ID: But how will you know which server it should connect? and where it asks for a username and password? 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: > > Hello, I installed a server with kvm and would like to know how to have > > the talk with this kvm OpenStack. What should I put in nova.conf? > > > > Thanks > > > > - --libvirt_type=kvm > > should do the trick. > > - --sharif > > > - -- > Sharif Islam > Senior Systems Analyst/Programmer > FutureGrid (http://futuregrid.org) > Pervasive Technology Institute, Indiana University Bloomington > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqFtrAAoJEACffes9SivFGNcIAKeHf3hBAMcWQinUN9dRvX4d > wCG/mJhlpHB90rFPfvV6b2vDZQ2K0M62TAy6PfOoq5K4z+2oHHBQ4vNtUQCdZOM5 > NCvXPsQTgEwRg6DCKM/obru8P8hQ4rqlTyF3AAretattzUuNbrCj7hOR1IrlqrlC > qL+MN9Zv2BXTApiHyL7KMsJvK1b9MhD8Ww0oMlwKL7GXQzNn4JtDiCIKz0A1Louc > HduNjw1aGuWGzWJ4ApOTLX1HBXPvfnJNlF9HLX8XsEF4/36bp4zEZNYmbqcHs80K > 6mSSo0aOc31Jy/bicZX75t1dp0qt5sKQi0vTxd6E1mFYUdsMacx9rAOCwHpZxxs= > =+wZA > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:25:01 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:25:01 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: <4EA85B6B.5050201@indiana.edu> Message-ID: <4EA85E8D.5010205@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:19 PM, Roberto Dalas Z. Benavides wrote: > But how will you know which server it should connect? and where it asks > for a username and password? How many servers do you have? If you have only one server then you will need to install all the nova services there -- nova-compute, nova-network etc. Otherwise, install a controller node with nova-network and nova-scheduler. And rest of the servers will only have nova-compute. 
And each of these compute nodes will have a nova.conf where you will define these flags: - --ec2_url=http://your_nova_controller_server_ip:8773/services/Cloud - --s3_host=your_nova_controller_server_ip - --cc_host=your_nova_controller_server_ip - --rabbit_host=your_nova_controller_server_ip - --network_host=your_nova_controller_server_ip I suggest your read the doc carefully, if you haven't already: http://docs.openstack.org/ And regarding password, usually VMs are booted up using ssh key so it won't need a password. - --sharif > > 2011/10/26 Sharif Islam > > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: >> Hello, I installed a server with kvm and would like to know how to > have >> the talk with this kvm OpenStack. What should I put in nova.conf? > >> Thanks > > > --libvirt_type=kvm > > should do the trick. > > --sharif > > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqF6NAAoJEACffes9SivF3p4H/jn7buudugiOZCx7wKqYop15 qPwwaEHITm7mz879BOxNWOHnMFexfjPn5dNebK+9+4WJTJTB6Wo5YddNxYKbytHa vuU9e3n9p8GHBO3UHdCvUbr9CPKCGUreMQeHpsVia37Y4rul+JD78jtGg1vl+P+N 6yPBrnsW5N2lAbhMMKFKp8tjErDGXa27dg0W5omnyKQ0puimysyyXspX63/HRSbO Bm3H/IQneQNtxK1QyQuGnsv7PYpOPVhWaTSqSe4kFw9+a3OYwUvYrByt6BH87wWI cOBCRJHL40hoXvo34fSm4qzi5Bv/KVJn90p+wPnLFIJYj4JUpQBXq27SpN1fecQ= =v8y1 -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 19:30:45 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Wed, 26 Oct 2011 17:30:45 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA85E8D.5010205@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> Message-ID: I have a server with all the nova services it and on another server I have installed kvm. As the new will know what he kvm server will create the machine in? For example: I use the vmware vmwareapi User data information and password. But in kvm? 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:19 PM, Roberto Dalas Z. Benavides wrote: > > > But how will you know which server it should connect? and where it asks > > for a username and password? > > > How many servers do you have? > > If you have only one server then you will need to install all the nova > services there -- nova-compute, nova-network etc. Otherwise, install a > controller node with nova-network and nova-scheduler. And rest of the > servers will only have nova-compute. > > And each of these compute nodes will have a nova.conf where you will > define these flags: > > - --ec2_url=http://your_nova_controller_server_ip:8773/services/Cloud > - --s3_host=your_nova_controller_server_ip > - --cc_host=your_nova_controller_server_ip > - --rabbit_host=your_nova_controller_server_ip > - --network_host=your_nova_controller_server_ip > > I suggest your read the doc carefully, if you haven't already: > http://docs.openstack.org/ > > And regarding password, usually VMs are booted up using ssh key so it > won't need a password. > > - --sharif > > > > > > > 2011/10/26 Sharif Islam >> > > > > On 10/26/2011 03:04 PM, Roberto Dalas Z. Benavides wrote: > >> Hello, I installed a server with kvm and would like to know how to > > have > >> the talk with this kvm OpenStack. What should I put in nova.conf? > > > >> Thanks > > > > > > --libvirt_type=kvm > > > > should do the trick. 
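Putting those flags together, a minimal nova.conf for a KVM compute node might look roughly like the sketch below. This is only an illustration: 10.0.0.1 stands in for the controller's address, and the usual log/state/lock path lines are omitted. Depending on the release you may also need the --sql_connection line shown in the XenServer nova.conf earlier in this thread, since the compute service talks to the same database.

    # /etc/nova/nova.conf on the KVM compute node (sketch; adjust addresses to your setup)
    --libvirt_type=kvm
    --ec2_url=http://10.0.0.1:8773/services/Cloud
    --s3_host=10.0.0.1
    --cc_host=10.0.0.1
    --rabbit_host=10.0.0.1
    --network_host=10.0.0.1

After editing the file, restarting nova-compute on that node should pick up the new flags.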
> > > > --sharif > > > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqF6NAAoJEACffes9SivF3p4H/jn7buudugiOZCx7wKqYop15 > qPwwaEHITm7mz879BOxNWOHnMFexfjPn5dNebK+9+4WJTJTB6Wo5YddNxYKbytHa > vuU9e3n9p8GHBO3UHdCvUbr9CPKCGUreMQeHpsVia37Y4rul+JD78jtGg1vl+P+N > 6yPBrnsW5N2lAbhMMKFKp8tjErDGXa27dg0W5omnyKQ0puimysyyXspX63/HRSbO > Bm3H/IQneQNtxK1QyQuGnsv7PYpOPVhWaTSqSe4kFw9+a3OYwUvYrByt6BH87wWI > cOBCRJHL40hoXvo34fSm4qzi5Bv/KVJn90p+wPnLFIJYj4JUpQBXq27SpN1fecQ= > =v8y1 > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From islamsh at indiana.edu Wed Oct 26 19:43:40 2011 From: islamsh at indiana.edu (Sharif Islam) Date: Wed, 26 Oct 2011 15:43:40 -0400 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> Message-ID: <4EA862EC.5070201@indiana.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > I have a server with all the novaservices it and on another server I > have installed kvm. > As the new will know what he kvm server will create the machine in? Ok. the server you have kvm, you will need to install nova-compute and in nova.conf file add --libvirt_type=kvm along with the other options. This way nova services will know which server to use. > For example: I use the vmware vmwareapi User data information and > password. But in kvm? I think this will depend how you create your images. You can add a local user in the image or as I mentioned before use a ssh key which will be injected by nova as it boots up. - --sharif -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= =aJjg -----END PGP SIGNATURE----- From betodalas at gmail.com Wed Oct 26 20:49:31 2011 From: betodalas at gmail.com (Roberto Dalas Z. 
Benavides) Date: Wed, 26 Oct 2011 18:49:31 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA862EC.5070201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> <4EA862EC.5070201@indiana.edu> Message-ID: I installed kvm and nova computer in same machine, but occurred a error: 2011-10-26 18:47:05,936 ERROR nova.exception [-] Uncaught exception (nova.exception): TRACE: Traceback (most recent call last): (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception .py", line 98, in wrapped (nova.exception): TRACE: return f(*args, **kw) (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/connection.py", line 673, in spawn (nova.exception): TRACE: self.firewall_driver.setup_basic_filtering(instance , network_info) (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/firewall.py", line 525, in setup_basic_filtering (nova.exception): TRACE: self.refresh_provider_fw_rules() (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libv irt/firewall.py", line 737, in refresh_provider_fw_rules (nova.exception): TRACE: self._do_refresh_provider_fw_rules() (nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/nova/utils.py" , line 687, in inner (nova.exception): TRACE: with lock: (nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/lockfile.py", line 223, in __enter__ (nova.exception): TRACE: self.acquire() (nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/lockfile.py", line 239, in acquire (nova.exception): TRACE: raise LockFailed("failed to create %s" % self.uniqu e_name) (nova.exception): TRACE: LockFailed: failed to create /usr/lib/python2.7/dist-pa ckages/OPSTACK-CTR-02.Dummy-9-23053 (nova.exception): TRACE: 2011-10-26 18:47:05,937 ERROR nova.compute.manager [-] Instance '10' failed to s pawn. Is virtualization enabled in the BIOS? Details: failed to create /usr/lib/ python2.7/dist-packages/OPSTACK-CTR-02.Dummy-9-23053 (nova.compute.manager): TRACE: Traceback (most recent call last): (nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/com pute/manager.py", line 424, in _run_instance (nova.compute.manager): TRACE: network_info, block_device_info) (nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exc eption.py", line 129, in wrapped (nova.compute.manager): TRACE: raise Error(str(e)) (nova.compute.manager): TRACE: Error: failed to create /usr/lib/python2.7/dist-p ackages/OPSTACK-CTR-02.Dummy-9-23053 (nova.compute.manager): TRACE: 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > > I have a server with all the novaservices it and on another server I > > have installed kvm. > > As the new will know what he kvm server will create the machine in? > > > Ok. the server you have kvm, you will need to install nova-compute and > in nova.conf file add --libvirt_type=kvm along with the other options. > This way nova services will know which server to use. > > > > For example: I use the vmware vmwareapi User data information and > > password. But in kvm? > > I think this will depend how you create your images. You can add a local > user in the image or as I mentioned before use a ssh key which will be > injected by nova as it boots up. 
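A note on the traceback above: the exception is raised while nova-compute tries to create its lock file (LockFailed: failed to create /usr/lib/python2.7/dist-packages/...), so the "Is virtualization enabled in the BIOS?" wording may be misleading here. One common cause -- a guess, not a confirmed diagnosis -- is that no writable lock_path is configured, so nova falls back to the Python package directory, which the nova user cannot write to. Something like the following, mirroring the --lock_path=/var/lock/nova used in the XenServer nova.conf earlier in this thread, is worth trying:

    # run as root on the compute node; assumes the Ubuntu packages' "nova" user
    mkdir -p /var/lock/nova
    chown nova:nova /var/lock/nova
    echo "--lock_path=/var/lock/nova" >> /etc/nova/nova.conf
    # then restart the compute service (e.g. "restart nova-compute" on Upstart systems)

If the error persists, it is still worth confirming that virtualization extensions are enabled (egrep -c '(vmx|svm)' /proc/cpuinfo should print a non-zero count).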
> > - --sharif > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d > Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 > kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP > nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi > dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g > aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= > =aJjg > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlan at bloomenterprises.org Thu Oct 27 01:02:57 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Wed, 26 Oct 2011 20:02:57 -0500 (CDT) Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: <041b297c-eff9-4a35-b13e-26f4669a3764@starx2> Message-ID: Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. 
error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: From sateesh.chodapuneedi at citrix.com Thu Oct 27 02:01:38 2011 From: sateesh.chodapuneedi at citrix.com (Sateesh Chodapuneedi) Date: Thu, 27 Oct 2011 07:31:38 +0530 Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: References: <041b297c-eff9-4a35-b13e-26f4669a3764@starx2> Message-ID: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C053@BANPMAILBOX01.citrite.net> Hi Harlan, You need to install libxml2 libxslt-dev. In Ubuntu, you can try apt-get install libxml2 libxslt-dev. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Harlan H. Bloom Sent: Thursday, October 27, 2011 6:33 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Installing dashboard - can't find Python.h Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. 
error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From harlan at bloomenterprises.org Thu Oct 27 02:18:41 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Wed, 26 Oct 2011 21:18:41 -0500 (CDT) Subject: [Openstack-operators] Installing dashboard - can't find Python.h In-Reply-To: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C053@BANPMAILBOX01.citrite.net> Message-ID: <52683164-39d6-44a7-b142-49bc4a9b2d53@starx2> Hi Sateesh, Deadsun suggested installing the python-dev package. And that worked. I was able to get the login page, but now I'm trying to figure out how to actually login to the dashboard. Thanks, Harlan... ----- Original Message ----- From: "Sateesh Chodapuneedi" To: "Harlan H. Bloom" , openstack-operators at lists.openstack.org Sent: Wednesday, October 26, 2011 9:01:38 PM Subject: RE: [Openstack-operators] Installing dashboard - can't find Python.h Hi Harlan, You need to install libxml2 libxslt-dev. In Ubuntu, you can try apt-get install libxml2 libxslt-dev. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." 
Description: http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Harlan H. Bloom Sent: Thursday, October 27, 2011 6:33 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Installing dashboard - can't find Python.h Hello, I'm this is probably a newbie question, but I haven't been able to find an answer, in English anyways, for this error: Installing collected packages: xattr, pep8, pylint, coverage, glance, quantum, openstack, openstackx, python-novaclient, anyjson, amqplib, decorator, Tempita, greenlet, logilab-common, logilab-astng, httplib2, argparse, prettytable Running setup.py install for xattr building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7: running install running build running build_py running build_ext building 'xattr._xattr' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c xattr/_xattr.c -o build/temp.linux-x86_64-2.7/xattr/_xattr.o xattr/_xattr.c:1:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/harlan/horizon/openstack-dashboard/.dashboard-venv/bin/python -c "import setuptools;__file__='/home/harlan/horizon/openstack-dashboard/.dashboard-venv/build/xattr/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-04M8XA-record/install-record.txt --install-headers /home/harlan/horizon/openstack-dashboard/.dashboard-venv/include/site/python2.7 failed with error code 1 Storing complete log in /home/harlan/.pip/pip.log Command "/home/harlan/horizon/openstack-dashboard/tools/with_venv.sh pip install -E /home/harlan/horizon/openstack-dashboard/.dashboard-venv -r /home/harlan/horizon/openstack-dashboard/tools/pip-requires" failed. None I'm installing this on Ubuntu Server 11.10. Any ideas or suggestions? Please let me know if you need any more information. Thanks, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From betodalas at gmail.com Thu Oct 27 08:22:17 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 06:22:17 -0200 Subject: [Openstack-operators] Xen With Openstack In-Reply-To: References: Message-ID: Thanks Giuseppe. I will try it. 
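For reference, the packaging Giuseppe describes (quoted again below) comes down to roughly the following sketch. The file names are only examples, and it is worth double-checking against the XenServer glance plugin whether it expects a plain gzipped image.vhd or a gzipped tar containing image.vhd -- both variants are mentioned in this thread:

    # convert the vmdk to VHD first (Giuseppe suggests XenConverter), then:
    mv converted-disk.vhd image.vhd          # the plugin looks for a file named image.vhd
    tar -czf lucid.vhd.tgz image.vhd         # gzipped tar wrapping image.vhd
    glance add name=lucid_ovf disk_format=vhd container_format=ovf \
        is_public=True < lucid.vhd.tgz

The glance add line is the one Giuseppe gave earlier; only the input file name differs.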
2011/10/26 Giuseppe Civitella > I'm currently using Diablo > (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) > and it works with XenServer 5.6 (it should with XCP 1.1 too). > I did non try yet Essex, sorry. > > > > 2011/10/26 Roberto Dalas Z. Benavides : > > My Openstack Versions is > > > > 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION) > > > > Is correct? > > > > Which version is more stable? > > > > 2011/10/26 Giuseppe Civitella > >> > >> If imagem.vhd is a gzipped tar archive containing a file called > >> image.vhd, your command should work this way: > >> glance add name=lucid_ovf disk_format=vhd container_format=ovf > >> is_public=True < imagem.vhd > >> > >> > >> > >> 2011/10/26 Roberto Dalas Z. Benavides : > >> > Can i use the command ? > >> > > >> > add name = glance lucid_ovf disk_format= vhd container_format vhd =ovf > > > >> > is_public True < imagem.vhd > >> > > >> > 2011/10/26 Giuseppe Civitella > >> >> > >> >> It has to be a vhd image. > >> >> You can try XenConverter to get a vhd from a vmdk. > >> >> > >> >> Cheers, > >> >> Giuseppe > >> >> > >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> > I have an image vmdk and am doing the following: > >> >> > add name = glance lucid_ovf disk_format container_format vhd = = = > >> >> > OVF > >> >> > is_public True image? > >> >> > > >> >> > Thanks > >> >> > > >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> > >> >> >> Yes, the nova-compute service has to run on a domU. > >> >> >> You need to install XenServer's plugins on dom0 (have a look here: > >> >> >> http://wiki.openstack.org/XenServerDevelopment). > >> >> >> The domU will tell the dom0 to deploy images via xenapi. > >> >> >> You need to extract you vhd image, rename it image.vhd and then > gzip > >> >> >> it. > >> >> >> Glance plugin on XenServer expect vhd images to be gzipped, so if > >> >> >> you > >> >> >> don't compress them the deploy process will fail. > >> >> >> > >> >> >> Cheers, > >> >> >> Giuseppe > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> > A doubt, the new server, compute, must be within a XenServer > >> >> >> > virtual > >> >> >> > machine? > >> >> >> > The image must actually be as gzip, or you can get on the same > >> >> >> > Glance > >> >> >> > as > >> >> >> > vhd? > >> >> >> > > >> >> >> > 2011/10/26 Giuseppe Civitella > >> >> >> >> > >> >> >> >> Hi, > >> >> >> >> > >> >> >> >> did you check what happens on XenServer's dom0? > >> >> >> >> Are there some pending gzip processes? > >> >> >> >> Deploy of vhd images can fail if they're are not properly > >> >> >> >> created. > >> >> >> >> You can find the rigth procedure here: > >> >> >> >> https://answers.launchpad.net/nova/+question/161683 > >> >> >> >> > >> >> >> >> Hope it helps > >> >> >> >> Giuseppe > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides : > >> >> >> >> > Hello, I installed Compute and New Glance a separate server. > >> >> >> >> > I'm > >> >> >> >> > trying > >> >> >> >> > to > >> >> >> >> > create VM on Xen by Dashboard. The panel is the pending > status > >> >> >> >> > logs > >> >> >> >> > and > >> >> >> >> > shows that XenServer's picking up the image of the Glance, > but > >> >> >> >> > the > >> >> >> >> > machine > >> >> >> >> > is not created. 
Follow the log: > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin > >> >> >> >> > R:cdbc860b307a|audit] Host.call_plugin host = > >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'glance'; fn = > >> >> >> >> > 'download_vhd'; > >> >> >> >> > args > >> >> >> >> > = [ > >> >> >> >> > params: (dp0 > >> >> >> >> > S'auth_token' > >> >> >> >> > p1 > >> >> >> >> > NsS'glance_port' > >> >> >> >> > p2 > >> >> >> >> > I9292 > >> >> >> >> > sS'uuid_stack' > >> >> >> >> > p3 > >> >> >> >> > (lp4 > >> >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241' > >> >> >> >> > p5 > >> >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3' > >> >> >> >> > p6 > >> >> >> >> > asS'image_id' > >> >> >> >> > p7 > >> >> >> >> > S'4' > >> >> >> >> > p8 > >> >> >> >> > sS'glance_host' > >> >> >> >> > p9 > >> >> >> >> > S'10.168.1.30' > >> >> >> >> > p10 > >> >> >> >> > sS'sr_path' > >> >> >> >> > p11 > >> >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667' > >> >> >> >> > p12 > >> >> >> >> > s. ] > >> >> >> >> > [20111024T14:18:24.251Z| > >> >> >> >> > info|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> >> > R:223f6eebc13d|dispatcher] spawning a new thread to handle > the > >> >> >> >> > current > >> >> >> >> > task > >> >> >> >> > (tr > >> >> >> >> > ackid=a043138728544674d13b8d4a8ff673f7) > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin > >> >> >> >> > R:223f6eebc13d|audit] Host.call_plugin host = > >> >> >> >> > '9b3736e1-18ef-4147-8564-a9c64ed3 > >> >> >> >> > 4f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = > >> >> >> >> > 'host_data'; > >> >> >> >> > args > >> >> >> >> > = > >> >> >> >> > [ ] > >> >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 > >> >> >> >> > unix-RPC||cli] > >> >> >> >> > xe > >> >> >> >> > host-list username=root password=null > >> >> >> >> > > >> >> >> >> > Follow the nova.conf: > >> >> >> >> > > >> >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf > >> >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge > >> >> >> >> > --logdir=/var/log/nova > >> >> >> >> > --state_path=/var/lib/nova > >> >> >> >> > --lock_path=/var/lock/nova > >> >> >> >> > --verbose > >> >> >> >> > > >> >> >> >> > #--libvirt_type=xen > >> >> >> >> > --s3_host=10.168.1.32 > >> >> >> >> > --rabbit_host=10.168.1.32 > >> >> >> >> > --cc_host=10.168.1.32 > >> >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud > >> >> >> >> > --fixed_range=192.168.1.0/24 > >> >> >> >> > --network_size=250 > >> >> >> >> > --ec2_api=10.168.1.32 > >> >> >> >> > --routing_source_ip=10.168.1.32 > >> >> >> >> > --verbose > >> >> >> >> > --sql_connection=mysql://root:status64 at 10.168.1.32/nova > >> >> >> >> > --network_manager=nova.network.manager.FlatManager > >> >> >> >> > --glance_api_servers=10.168.1.32:9292 > >> >> >> >> > --image_service=nova.image.glance.GlanceImageService > >> >> >> >> > --flat_network_bridge=xenbr0 > >> >> >> >> > --connection_type=xenapi > >> >> >> >> > --xenapi_connection_url=https://10.168.1.31 > >> >> >> >> > --xenapi_connection_username=root > >> >> >> >> > --xenapi_connection_password=status64 > >> >> >> >> > --reboot_timeout=600 > >> >> >> >> > --rescue_timeout=86400 > >> >> >> >> > --resize_confirm_window=86400 > >> >> >> >> > --allow_resize_to_same_host > >> >> >> >> > > >> >> >> 
>> > New log-in information compute.log shows cpu, memory, about > Xen > >> >> >> >> > Sevres, > >> >> >> >> > but > >> >> >> >> > does not create machines. > >> >> >> >> > > >> >> >> >> > Thanks > >> >> >> >> > _______________________________________________ > >> >> >> >> > Openstack-operators mailing list > >> >> >> >> > Openstack-operators at lists.openstack.org > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> >> >> >> > > >> >> >> >> > > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 08:25:05 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 06:25:05 -0200 Subject: [Openstack-operators] OpenStack With KVM In-Reply-To: <4EA862EC.5070201@indiana.edu> References: <4EA85B6B.5050201@indiana.edu> <4EA85E8D.5010205@indiana.edu> <4EA862EC.5070201@indiana.edu> Message-ID: Thanks Sharif. I got 2011/10/26 Sharif Islam > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 10/26/2011 03:30 PM, Roberto Dalas Z. Benavides wrote: > > I have a server with all the novaservices it and on another server I > > have installed kvm. > > As the new will know what he kvm server will create the machine in? > > > Ok. the server you have kvm, you will need to install nova-compute and > in nova.conf file add --libvirt_type=kvm along with the other options. > This way nova services will know which server to use. > > > > For example: I use the vmware vmwareapi User data information and > > password. But in kvm? > > I think this will depend how you create your images. You can add a local > user in the image or as I mentioned before use a ssh key which will be > injected by nova as it boots up. > > - --sharif > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQEcBAEBAgAGBQJOqGLsAAoJEACffes9SivFZMQIALchlwetKfSF5NIT4P2EzK2d > Gp7MTDLm77ATJ2XC2bhdZiHR64wdyC6ehjmHoyl5JBHcQWP6cECFuS93Yc1D8cc1 > kmTKSNXtKxvn0eKxCPyARohIaJO2rXMHGEhZTr5amOx31uuebbAVpU+ONJkaw6zP > nlNvNwqfxAefHicD3jMYY+PSrQWSRDy6oxWHh5ctNDtVF0b7o3jjY7D+RzhO2gNi > dUuBHqsQQTiqmp5bRFQ0uh+nvPFTFEqazzpbS4uMRWRTXi2PVjWZLoBMZU9+Tl7g > aRbpmBOdebhsaqsvYI2vKqzR5kXRdrulRpZUGUxHIEZW6XfItvBHnVEegxyLH8g= > =aJjg > -----END PGP SIGNATURE----- > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 08:26:27 2011 From: betodalas at gmail.com (Roberto Dalas Z. 
Benavides) Date: Thu, 27 Oct 2011 06:26:27 -0200 Subject: [Openstack-operators] nova-vnc proxy Message-ID: Hello, I could not start The New vncproxy in the error log shows: new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in (new): TRACE: host = FLAGS.vncproxy_host) (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line 116, in start_tcp (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) (new): TRACE: File "/ usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in listen (new): TRACE: sock.bind (addr) (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth (new): TRACE: return getattr (self._sock, name) (* args) (new): TRACE: error: [Errno 13] Permission denied what can be? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Thu Oct 27 08:55:55 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Thu, 27 Oct 2011 17:55:55 +0900 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Hi, Maybe.. $ sudo start nova-vncproxy <--- this command was run by user 'nova' so, Try run nova-vncproxy by root. # nova-vncproxy & 2011/10/27 Roberto Dalas Z. Benavides > Hello, I could not start The New vncproxy in the error log shows: > > new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in > (new): TRACE: host = FLAGS.vncproxy_host) > (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line > 116, in start_tcp > (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) > (new): TRACE: File "/ > usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in > listen > (new): TRACE: sock.bind (addr) > (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth > (new): TRACE: return getattr (self._sock, name) (* args) > (new): TRACE: error: [Errno 13] Permission denied > what can be? > > Thanks > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 09:39:06 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 07:39:06 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Very good work, but when I click on the dashboard vnc he tries to open the ip127.0.0.1 and not the server ip, you know where you configure it? 2011/10/27 Jae Sang Lee > Hi, > Maybe.. > $ sudo start nova-vncproxy <--- this command was run by user 'nova' > > so, Try run nova-vncproxy by root. > # nova-vncproxy & > > > 2011/10/27 Roberto Dalas Z. Benavides > >> Hello, I could not start The New vncproxy in the error log shows: >> >> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >> (new): TRACE: host = FLAGS.vncproxy_host) >> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line >> 116, in start_tcp >> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >> (new): TRACE: File "/ >> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >> listen >> (new): TRACE: sock.bind (addr) >> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >> (new): TRACE: return getattr (self._sock, name) (* args) >> (new): TRACE: error: [Errno 13] Permission denied >> what can be? 
>> >> Thanks >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 16:12:01 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 14:12:01 -0200 Subject: [Openstack-operators] Multiple Hypervisors Message-ID: Hello, I wonder if I install a nova-controller and two nova-computers, with each nova-computer connected to a different hypervisor,. Being with one another with KVM and VMware. If I can, as I do that? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Thu Oct 27 16:59:52 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Thu, 27 Oct 2011 14:59:52 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: I did it, but when I click on the "vnc" in the dashboard, he directs it to the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... I wonder where this ip changes. thank you 2011/10/27 Jae Sang Lee > Hi, > Maybe.. > $ sudo start nova-vncproxy <--- this command was run by user 'nova' > > so, Try run nova-vncproxy by root. > # nova-vncproxy & > > > 2011/10/27 Roberto Dalas Z. Benavides > >> Hello, I could not start The New vncproxy in the error log shows: >> >> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >> (new): TRACE: host = FLAGS.vncproxy_host) >> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", line >> 116, in start_tcp >> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >> (new): TRACE: File "/ >> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >> listen >> (new): TRACE: sock.bind (addr) >> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >> (new): TRACE: return getattr (self._sock, name) (* args) >> (new): TRACE: error: [Errno 13] Permission denied >> what can be? >> >> Thanks >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlan at bloomenterprises.org Thu Oct 27 17:28:35 2011 From: harlan at bloomenterprises.org (Harlan H. Bloom) Date: Thu, 27 Oct 2011 12:28:35 -0500 (CDT) Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: Message-ID: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Hello Everyone, I setup OpenStackDashboard according to the instructions on this wiki: http://wiki.openstack.org/OpenStackDashboard However, I can't seem to figure out how to login to the Dashboard. I've tried: User/pass: root/localpassword, admin/admin, admin/999888777666, other local unix usernames and passwords. I do have keystone setup and it appears to be running correctly. I'm running on Ubuntu Server 11.10. All of OpenStack is running on this computer; this is a test system until we get more comfortable with OpenStack before setting up the "real" hardware. I can create VM's from the command line and connect to them just fine. I only need to use the pem files created during OpenStack installation. We would very much prefer to use the website for most of our users. 
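On the 127.0.0.1 question: the link the dashboard opens is built by nova from its VNC proxy settings, not by the dashboard itself, so the place to change it is nova.conf on the cloud controller. The exact flag names depend on the release; on a Diablo-era build something along these lines is worth trying (192.168.1.100 is only a placeholder for the proxy's public address):

    --vncproxy_host=0.0.0.0
    --vncproxy_url=http://192.168.1.100:6080

vncproxy_host (the flag visible in the traceback quoted above) controls where nova-vncproxy listens, while vncproxy_url, if your build has it, is the address handed back to browsers. Restarting nova-api and nova-vncproxy after the change should be enough -- check the flag list for your version if neither name is recognised.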
If you need any other information, please let me know. Thank you for your time and attention, Harlan... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdshaonimran at gmail.com Thu Oct 27 17:59:12 2011 From: mdshaonimran at gmail.com (Shaon) Date: Thu, 27 Oct 2011 23:59:12 +0600 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: Login with the username/password you created during the nova installation. On Thu, Oct 27, 2011 at 11:28 PM, Harlan H. Bloom < harlan at bloomenterprises.org> wrote: > Hello Everyone, > I setup OpenStackDashboard according to the instructions on this wiki: > http://wiki.openstack.org/OpenStackDashboard > > However, I can't seem to figure out how to login to the Dashboard. I've > tried: > User/pass: root/localpassword, admin/admin, admin/999888777666, other > local unix usernames and passwords. > > I do have keystone setup and it appears to be running correctly. > > I'm running on Ubuntu Server 11.10. All of OpenStack is running on this > computer; this is a test system until we get more comfortable with OpenStack > before setting up the "real" hardware. > > I can create VM's from the command line and connect to them just fine. I > only need to use the pem files created during OpenStack installation. We > would very much prefer to use the website for most of our users. > > If you need any other information, please let me know. > > Thank you for your time and attention, > > Harlan... > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- thanks -shaon http://mdshaonimran.wordpress.com http://twitter.com/mdshaonimran -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Fri Oct 28 01:27:46 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Fri, 28 Oct 2011 10:27:46 +0900 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: You can change vncproxy host by nova.conf. --vncproxy_host= 2011/10/28 Roberto Dalas Z. Benavides > I did it, but when I click on the "vnc" in the dashboard, he directs it to > the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... > I wonder where this ip changes. > > thank you > > 2011/10/27 Jae Sang Lee > >> Hi, >> Maybe.. >> $ sudo start nova-vncproxy <--- this command was run by user 'nova' >> >> so, Try run nova-vncproxy by root. >> # nova-vncproxy & >> >> >> 2011/10/27 Roberto Dalas Z. Benavides >> >>> Hello, I could not start The New vncproxy in the error log shows: >>> >>> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >>> (new): TRACE: host = FLAGS.vncproxy_host) >>> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", >>> line 116, in start_tcp >>> (new): TRACE: eventlet.listen socket = ((host, port), backlog = backlog) >>> (new): TRACE: File "/ >>> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >>> listen >>> (new): TRACE: sock.bind (addr) >>> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >>> (new): TRACE: return getattr (self._sock, name) (* args) >>> (new): TRACE: error: [Errno 13] Permission denied >>> what can be? 
>>> >>> Thanks >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Fri Oct 28 01:32:32 2011 From: hyangii at gmail.com (Jae Sang Lee) Date: Fri, 28 Oct 2011 10:32:32 +0900 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: You should login using keystone information. In Keystone DB, there are user information. If you run 'sampledata' when setup keystone, maybe admin user set password 'secrete' try to input 'admin/secrete' 2011/10/28 Shaon > Login with the username/password you created during the nova installation. > > On Thu, Oct 27, 2011 at 11:28 PM, Harlan H. Bloom < > harlan at bloomenterprises.org> wrote: > >> Hello Everyone, >> I setup OpenStackDashboard according to the instructions on this wiki: >> http://wiki.openstack.org/OpenStackDashboard >> >> However, I can't seem to figure out how to login to the Dashboard. I've >> tried: >> User/pass: root/localpassword, admin/admin, admin/999888777666, other >> local unix usernames and passwords. >> >> I do have keystone setup and it appears to be running correctly. >> >> I'm running on Ubuntu Server 11.10. All of OpenStack is running on this >> computer; this is a test system until we get more comfortable with OpenStack >> before setting up the "real" hardware. >> >> I can create VM's from the command line and connect to them just fine. >> I only need to use the pem files created during OpenStack installation. We >> would very much prefer to use the website for most of our users. >> >> If you need any other information, please let me know. >> >> Thank you for your time and attention, >> >> Harlan... >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > > -- > thanks > -shaon > > http://mdshaonimran.wordpress.com > http://twitter.com/mdshaonimran > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:24:42 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:24:42 -0200 Subject: [Openstack-operators] nova-vnc proxy In-Reply-To: References: Message-ID: Very good!! Thanks 2011/10/27 Jae Sang Lee > You can change vncproxy host by nova.conf. > --vncproxy_host= > > 2011/10/28 Roberto Dalas Z. Benavides > > I did it, but when I click on the "vnc" in the dashboard, he directs it to >> the ip 127.0.0.1:6080 / vnc_auto.html?-token = 9a4bcfa9 ...... >> I wonder where this ip changes. >> >> thank you >> >> 2011/10/27 Jae Sang Lee >> >>> Hi, >>> Maybe.. >>> $ sudo start nova-vncproxy <--- this command was run by user 'nova' >>> >>> so, Try run nova-vncproxy by root. >>> # nova-vncproxy & >>> >>> >>> 2011/10/27 Roberto Dalas Z. 
Benavides >>> >>>> Hello, I could not start The New vncproxy in the error log shows: >>>> >>>> new): TRACE: File "/ usr / bin / new-vncproxy", line 116, in >>>> (new): TRACE: host = FLAGS.vncproxy_host) >>>> (new): TRACE: File "/ usr/lib/python2.7/dist-packages/nova/wsgi.py", >>>> line 116, in start_tcp >>>> (new): TRACE: eventlet.listen socket = ((host, port), backlog = >>>> backlog) >>>> (new): TRACE: File "/ >>>> usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in >>>> listen >>>> (new): TRACE: sock.bind (addr) >>>> (new): TRACE: File "/ usr/lib/python2.7/socket.py", line 224, in meth >>>> (new): TRACE: return getattr (self._sock, name) (* args) >>>> (new): TRACE: error: [Errno 13] Permission denied >>>> what can be? >>>> >>>> Thanks >>>> >>>> _______________________________________________ >>>> Openstack-operators mailing list >>>> Openstack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:28:34 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:28:34 -0200 Subject: [Openstack-operators] 1 controller and multiple hypervisors Message-ID: Hello, I have two server node and a server controller. Each node points to a KVM. I wonder how the installation of the servers will be made when I click on launch Dashboard. Is it random? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:33:04 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:33:04 -0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: Hi Harlan, use this document: http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ Robert 2011/10/27 Harlan H. Bloom > Hello Everyone, > I setup OpenStackDashboard according to the instructions on this wiki: > http://wiki.openstack.org/OpenStackDashboard > > However, I can't seem to figure out how to login to the Dashboard. I've > tried: > User/pass: root/localpassword, admin/admin, admin/999888777666, other > local unix usernames and passwords. > > I do have keystone setup and it appears to be running correctly. > > I'm running on Ubuntu Server 11.10. All of OpenStack is running on this > computer; this is a test system until we get more comfortable with OpenStack > before setting up the "real" hardware. > > I can create VM's from the command line and connect to them just fine. I > only need to use the pem files created during OpenStack installation. We > would very much prefer to use the website for most of our users. > > If you need any other information, please let me know. > > Thank you for your time and attention, > > Harlan... > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
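A quick way to take the web UI out of the equation in this dashboard login thread is to ask Keystone for a token directly. The host, port and the admin/secrete pair below are only illustrative (the password is the one created by the keystone sampledata mentioned earlier); substitute your own Keystone endpoint and credentials:

curl -d '{"auth": {"passwordCredentials": {"username": "admin", "password": "secrete"}}}' \
     -H "Content-Type: application/json" http://<keystone-host>:5000/v2.0/tokens

If this returns a token, the credentials are fine and the problem is on the dashboard side; if it returns 401 or the connection is refused, fix Keystone (or the user/password) before touching the dashboard configuration.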
URL: From diego.parrilla at stackops.com Fri Oct 28 08:40:30 2011 From: diego.parrilla at stackops.com (Diego Parrilla) Date: Fri, 28 Oct 2011 10:40:30 +0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: On Fri, Oct 28, 2011 at 10:33 AM, Roberto Dalas Z. Benavides < betodalas at gmail.com> wrote: > Hi Harlan, use this document: > > > http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ > > This is valid for Cactus versions of Nova and pre-keystone integration of the Dashboard. Diablo/stable and Essex branches need keystone for dashboard afaik. Diego > Robert > > 2011/10/27 Harlan H. Bloom > >> Hello Everyone, >> I setup OpenStackDashboard according to the instructions on this wiki: >> http://wiki.openstack.org/OpenStackDashboard >> >> However, I can't seem to figure out how to login to the Dashboard. I've >> tried: >> User/pass: root/localpassword, admin/admin, admin/999888777666, other >> local unix usernames and passwords. >> >> I do have keystone setup and it appears to be running correctly. >> >> I'm running on Ubuntu Server 11.10. All of OpenStack is running on this >> computer; this is a test system until we get more comfortable with OpenStack >> before setting up the "real" hardware. >> >> I can create VM's from the command line and connect to them just fine. >> I only need to use the pem files created during OpenStack installation. We >> would very much prefer to use the website for most of our users. >> >> If you need any other information, please let me know. >> >> Thank you for your time and attention, >> >> Harlan... >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 08:56:51 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 06:56:51 -0200 Subject: [Openstack-operators] Logging into OpenStackDashboard In-Reply-To: References: <579d6428-7b44-492d-8ff7-02b2f48cb316@starx2> Message-ID: In my works. What is the difference? 2011/10/28 Diego Parrilla > > On Fri, Oct 28, 2011 at 10:33 AM, Roberto Dalas Z. Benavides < > betodalas at gmail.com> wrote: > >> Hi Harlan, use this document: >> >> >> http://cssoss.wordpress.com/2011/04/27/openstack-beginners-guide-for-ubuntu-11-04-installation-and-configuration/ >> >> > > This is valid for Cactus versions of Nova and pre-keystone integration of > the Dashboard. Diablo/stable and Essex branches need keystone for dashboard > afaik. > > Diego > > > >> Robert >> >> 2011/10/27 Harlan H. Bloom >> >>> Hello Everyone, >>> I setup OpenStackDashboard according to the instructions on this wiki: >>> http://wiki.openstack.org/OpenStackDashboard >>> >>> However, I can't seem to figure out how to login to the Dashboard. >>> I've tried: >>> User/pass: root/localpassword, admin/admin, admin/999888777666, >>> other local unix usernames and passwords. >>> >>> I do have keystone setup and it appears to be running correctly. >>> >>> I'm running on Ubuntu Server 11.10. 
All of OpenStack is running on >>> this computer; this is a test system until we get more comfortable with >>> OpenStack before setting up the "real" hardware. >>> >>> I can create VM's from the command line and connect to them just fine. >>> I only need to use the pem files created during OpenStack installation. We >>> would very much prefer to use the website for most of our users. >>> >>> If you need any other information, please let me know. >>> >>> Thank you for your time and attention, >>> >>> Harlan... >>> >>> >>> _______________________________________________ >>> Openstack-operators mailing list >>> Openstack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 09:47:58 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 07:47:58 -0200 Subject: [Openstack-operators] Inject ip vmware Message-ID: Hello, I have a new machine with compute-and KVM. With the option - flat_injected = true nova.conf I can inject the ips on vms. In vmware is to do this or only works in KVM? Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Fri Oct 28 10:41:38 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 08:41:38 -0200 Subject: [Openstack-operators] Multiple nodes, priority Message-ID: Hello, I installed a cloud controller and two nodes with KVM. When I click install on the dashboard, it installs the kvm vms randomly at home. I wonder if it is to set prior to installation. thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From sateesh.chodapuneedi at citrix.com Fri Oct 28 11:57:24 2011 From: sateesh.chodapuneedi at citrix.com (Sateesh Chodapuneedi) Date: Fri, 28 Oct 2011 17:27:24 +0530 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: Message-ID: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Yes, the flag (flat_injected = true) works for nova vmware driver too. Regards, Sateesh ---------------------------------------------------------------------------------------------------------------------------- "This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure, or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message." From: openstack-operators-bounces at lists.openstack.org [mailto:openstack-operators-bounces at lists.openstack.org] On Behalf Of Roberto Dalas Z. Benavides Sent: Friday, October 28, 2011 3:18 PM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Inject ip vmware Hello, I have a new machine with compute-and KVM. With the option - flat_injected = true nova.conf I can inject the ips on vms. In vmware is to do this or only works in KVM? Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
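For background on the injection question in this thread: with --network_manager=nova.network.manager.FlatManager and --flat_injected=true there is no DHCP; nova writes the network settings straight into the guest's /etc/network/interfaces. The injected file ends up looking roughly like the sketch below (the addresses are illustrative and the exact template varies by release):

auto eth0
iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1

This only helps guests that read a Debian-style interfaces file, and how the settings actually reach a VMware guest depends on the vmwareapi driver and the guest image, so results there can differ from KVM.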
Name: image001.gif Type: image/gif Size: 43 bytes Desc: image001.gif URL: From betodalas at gmail.com Fri Oct 28 12:02:57 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 10:02:57 -0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: Not the iamge injecting ip, ip runs out. My nova.conf looks like: --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --verbose --libvirt_type=qemu #--lock_path=/tmp --connection_type=libvirt --s3_host=10.168.1.4 --rabbit_host=10.168.1.4 --cc_host=10.168.1.4 --ec2_url=http://10.168.1.4:8773/services/Cloud --fixed_range=192.168.1.0/24 --network_size=250 --ec2_api=10.168.1.4 --routing_source_ip=10.168.1.4 --verbose --sql_connection=mysql://root:status64 at 10.168.1.4/nova --network_manager=nova.network.manager.FlatManager --glance_api_servers=10.168.1.30:9292 --image_service=nova.image.glance.GlanceImageService --flat_interface=eth0 --flat_injected=true --connection_type=vmwareapi --vmwareapi_host_ip=10.168.1.7:443 --vmwareapi_host_username=root --vmwareapi_host_password=status64 --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl --vncproxy_url=http://10.168.1.31:6080 --vncproxy_host=10.168.1.4 --vncproxy_port=6080 --vnc_console_proxy_url=http://10.168.1.31:6080 --vnc_enabled=True #--ajax_console_proxy_url=http://10.168.1.4:8000 #--ajax_console_proxy_port=8000 need anything else? 2011/10/28 Sateesh Chodapuneedi > Yes, the flag (flat_injected = true) works for nova vmware driver too.**** > > ** ** > > Regards,**** > > Sateesh**** > > ** ** > > > ---------------------------------------------------------------------------------------------------------------------------- > **** > > "This e-mail message is for the sole use of the intended recipient(s) and > may contain confidential and/or privileged information. Any unauthorized > review, use, disclosure, or distribution is prohibited. If you are not the > intended recipient, please contact the sender by reply e-mail and destroy > all copies of the original message." > [image: Description: > http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] > **** > > ** ** > > *From:* openstack-operators-bounces at lists.openstack.org [mailto: > openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto > Dalas Z. Benavides > *Sent:* Friday, October 28, 2011 3:18 PM > *To:* openstack-operators at lists.openstack.org > *Subject:* [Openstack-operators] Inject ip vmware**** > > ** ** > > Hello, I have a new machine with compute-and KVM. With the option - > flat_injected = true nova.conf I can inject the ips on vms. > In vmware is to do this or only works in KVM? > > Thank you very much **** > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
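One thing worth noting about the nova.conf pasted above (the next reply flags the same thing) is that --connection_type is set twice: libvirt near the top and vmwareapi further down. Which value ends up winning depends on how the flag file is parsed, so on a compute node that should only drive VMware it is safer to keep a single block, roughly the VMware-related flags already present in the file:

--connection_type=vmwareapi
--vmwareapi_host_ip=10.168.1.7:443
--vmwareapi_host_username=root
--vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl
--network_manager=nova.network.manager.FlatManager
--flat_interface=eth0
--flat_injected=true

(The --libvirt_type=qemu line only matters for the libvirt driver and can be dropped on a VMware-only node.)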
Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From diego.parrilla.santamaria at gmail.com Fri Oct 28 12:06:16 2011 From: diego.parrilla.santamaria at gmail.com (=?ISO-8859-1?Q?Diego_Parrilla_Santamar=EDa?=) Date: Fri, 28 Oct 2011 14:06:16 +0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: It seems --connection_type is twice in the file: On Fri, Oct 28, 2011 at 2:02 PM, Roberto Dalas Z. Benavides < betodalas at gmail.com> wrote: > Not the iamge injecting ip, ip runs out. My nova.conf looks like: > > > --dhcpbridge_flagfile=/etc/nova/nova.conf > --dhcpbridge=/usr/bin/nova-dhcpbridge > --logdir=/var/log/nova > --state_path=/var/lib/nova > --verbose > --libvirt_type=qemu > #--lock_path=/tmp > --connection_type=libvirt > > --s3_host=10.168.1.4 > --rabbit_host=10.168.1.4 > --cc_host=10.168.1.4 > --ec2_url=http://10.168.1.4:8773/services/Cloud > --fixed_range=192.168.1.0/24 > --network_size=250 > --ec2_api=10.168.1.4 > --routing_source_ip=10.168.1.4 > --verbose > --sql_connection=mysql://root:status64 at 10.168.1.4/nova > --network_manager=nova.network.manager.FlatManager > --glance_api_servers=10.168.1.30:9292 > --image_service=nova.image.glance.GlanceImageService > --flat_interface=eth0 > --flat_injected=true > --connection_type=vmwareapi > --vmwareapi_host_ip=10.168.1.7:443 > --vmwareapi_host_username=root > --vmwareapi_host_password=status64 > --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl > > --vncproxy_url=http://10.168.1.31:6080 > --vncproxy_host=10.168.1.4 > --vncproxy_port=6080 > --vnc_console_proxy_url=http://10.168.1.31:6080 > --vnc_enabled=True > > #--ajax_console_proxy_url=http://10.168.1.4:8000 > #--ajax_console_proxy_port=8000 > > need anything else? > > 2011/10/28 Sateesh Chodapuneedi > >> Yes, the flag (flat_injected = true) works for nova vmware driver too.*** >> * >> >> ** ** >> >> Regards,**** >> >> Sateesh**** >> >> ** ** >> >> >> ---------------------------------------------------------------------------------------------------------------------------- >> **** >> >> "This e-mail message is for the sole use of the intended recipient(s) and >> may contain confidential and/or privileged information. Any unauthorized >> review, use, disclosure, or distribution is prohibited. If you are not the >> intended recipient, please contact the sender by reply e-mail and destroy >> all copies of the original message." >> [image: Description: >> http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] >> **** >> >> ** ** >> >> *From:* openstack-operators-bounces at lists.openstack.org [mailto: >> openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto >> Dalas Z. Benavides >> *Sent:* Friday, October 28, 2011 3:18 PM >> *To:* openstack-operators at lists.openstack.org >> *Subject:* [Openstack-operators] Inject ip vmware**** >> >> ** ** >> >> Hello, I have a new machine with compute-and KVM. With the option - >> flat_injected = true nova.conf I can inject the ips on vms. >> In vmware is to do this or only works in KVM? >> >> Thank you very much **** >> > > > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From betodalas at gmail.com Fri Oct 28 12:22:20 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 10:22:20 -0200 Subject: [Openstack-operators] Inject ip vmware In-Reply-To: References: <35F04D4C394874409D9BE4BF45AC5EA9DE12B2C246@BANPMAILBOX01.citrite.net> Message-ID: I took this IPCA but not injected into the network card. Eth0 is the only inet6 addr. Thanks 2011/10/28 Diego Parrilla Santamar?a > It seems --connection_type is twice in the file: > > On Fri, Oct 28, 2011 at 2:02 PM, Roberto Dalas Z. Benavides < > betodalas at gmail.com> wrote: > >> Not the iamge injecting ip, ip runs out. My nova.conf looks like: >> >> >> --dhcpbridge_flagfile=/etc/nova/nova.conf >> --dhcpbridge=/usr/bin/nova-dhcpbridge >> --logdir=/var/log/nova >> --state_path=/var/lib/nova >> --verbose >> --libvirt_type=qemu >> #--lock_path=/tmp >> --connection_type=libvirt >> >> --s3_host=10.168.1.4 >> --rabbit_host=10.168.1.4 >> --cc_host=10.168.1.4 >> --ec2_url=http://10.168.1.4:8773/services/Cloud >> --fixed_range=192.168.1.0/24 >> --network_size=250 >> --ec2_api=10.168.1.4 >> --routing_source_ip=10.168.1.4 >> --verbose >> --sql_connection=mysql://root:status64 at 10.168.1.4/nova >> --network_manager=nova.network.manager.FlatManager >> --glance_api_servers=10.168.1.30:9292 >> --image_service=nova.image.glance.GlanceImageService >> --flat_interface=eth0 >> --flat_injected=true >> --connection_type=vmwareapi >> --vmwareapi_host_ip=10.168.1.7:443 >> --vmwareapi_host_username=root >> --vmwareapi_host_password=status64 >> --vmwareapi_wsdl_loc=http://10.168.1.4/vimService.wsdl >> >> --vncproxy_url=http://10.168.1.31:6080 >> --vncproxy_host=10.168.1.4 >> --vncproxy_port=6080 >> --vnc_console_proxy_url=http://10.168.1.31:6080 >> --vnc_enabled=True >> >> #--ajax_console_proxy_url=http://10.168.1.4:8000 >> #--ajax_console_proxy_port=8000 >> >> need anything else? >> >> 2011/10/28 Sateesh Chodapuneedi >> >>> Yes, the flag (flat_injected = true) works for nova vmware driver too.** >>> ** >>> >>> ** ** >>> >>> Regards,**** >>> >>> Sateesh**** >>> >>> ** ** >>> >>> >>> ---------------------------------------------------------------------------------------------------------------------------- >>> **** >>> >>> "This e-mail message is for the sole use of the intended recipient(s) and >>> may contain confidential and/or privileged information. Any unauthorized >>> review, use, disclosure, or distribution is prohibited. If you are not the >>> intended recipient, please contact the sender by reply e-mail and destroy >>> all copies of the original message." >>> [image: Description: >>> http://www6.integrityatwork.net/integrity/courses/ic1c/ic1cSTD/ic1cSTD_templates/shim.gif] >>> **** >>> >>> ** ** >>> >>> *From:* openstack-operators-bounces at lists.openstack.org [mailto: >>> openstack-operators-bounces at lists.openstack.org] *On Behalf Of *Roberto >>> Dalas Z. Benavides >>> *Sent:* Friday, October 28, 2011 3:18 PM >>> *To:* openstack-operators at lists.openstack.org >>> *Subject:* [Openstack-operators] Inject ip vmware**** >>> >>> ** ** >>> >>> Hello, I have a new machine with compute-and KVM. With the option - >>> flat_injected = true nova.conf I can inject the ips on vms. >>> In vmware is to do this or only works in KVM? 
>>> >>> Thank you very much **** >>> >> >> >> _______________________________________________ >> Openstack-operators mailing list >> Openstack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 43 bytes Desc: not available URL: From betodalas at gmail.com Fri Oct 28 13:58:45 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Fri, 28 Oct 2011 11:58:45 -0200 Subject: [Openstack-operators] Criteria for distribution of vm Message-ID: Hello, what is the criterion for distributing VMs among the hypervisors? Availability of resources? -------------- next part -------------- An HTML attachment was scrubbed... URL: From betodalas at gmail.com Mon Oct 31 09:45:59 2011 From: betodalas at gmail.com (Roberto Dalas Z. Benavides) Date: Mon, 31 Oct 2011 07:45:59 -0200 Subject: [Openstack-operators] Dashboard + Keystone Message-ID: Hello everyone, I'm trying to access the dashboard using Keystone and I get this error: [31/Oct/2011 07:06:07] "POST /auth/login/?next=/dash/ HTTP/1.1" 200 1363 [31/Oct/2011 07:17:41] "GET /auth/login/?next=/dash/ HTTP/1.1" 200 1228 DEBUG: novaclient.client: REQ: http://10.168.1.4:5000/v2.0/tokens curl -i -X POST -H "Content-Type: application/json" -H "User-Agent: python-novaclient" DEBUG: novaclient.client: BODY REQ: {"auth": {"passwordCredentials": {"username": "dualtec", "password": "status64"}}} DEBUG: novaclient.client: RESP: {'status': '400', 'content-length': 24, 'content-type': 'text/plain'} [Errno 111] ECONNREFUSED When I run the command /etc/init.d/keystone start it says it has started, but then it goes down again. Does anyone know what this could be? thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From andi.abes at gmail.com Thu Oct 20 15:42:46 2011 From: andi.abes at gmail.com (andi abes) Date: Thu, 20 Oct 2011 15:42:46 -0000 Subject: [Openstack-operators] swift proxy server problem In-Reply-To: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> References: <16D2E990EC74F04799F77FA02C7E1F8D010CF98C9C92@EXMB01CMS.surrey.ac.uk> Message-ID: On a quick look, it seems that your proxy paste config is a bit off. It would be useful to include your proxy config file, as well as version info (Cactus / Diablo, milestone, etc.). As a side note - rather than trying to install from scratch manually, look at some of the existing deployment scripts out there; there are some using Chef, some using Puppet, and some that are more comprehensive. a. 
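For reference, a minimal Diablo-era /etc/swift/proxy-server.conf that paste.deploy loads cleanly looks roughly like the sketch below. The tempauth filter is only an example (deployments of that era also used swauth or keystone middleware), and every name in the pipeline must have a matching [filter:...] or [app:...] section:

[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

A typo in one of the use = egg:swift#... lines, or a pipeline entry with no matching section, produces exactly the kind of loadwsgi.py traceback quoted below.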
On Thu, Oct 20, 2011 at 11:14 AM, wrote: > Hi All, > > I'm trying to set up swift and am having an issue with getting the proxy > service to start, after a > swift-init proxy start > > the proxy does not start and I see this in the logs: > > Oct 20 16:12:14 storage05 proxy-server UNCAUGHT EXCEPTION#012Traceback > (most recent call last):#012 File "/usr/bin/swift-proxy-server", line 22, > in #012 run_wsgi(conf_file, 'proxy-server', default_port=8080, > **options)#012 File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", > line 126, in run_wsgi#012 app = loadapp('config:%s' % conf_file, > global_conf={'log_name': log_name})#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in > loadapp#012 return loadobj(APP, uri, name=name, **kw)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in > loadobj#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in > loadcontext#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in > _loadconfig#012 return loader.get_context(object_type, name, > global_conf)#012 File "/usr/lib/pymodules/python2.6/paste/deploy/l > oadwsgi.py", line 405, in get_context#012 > global_additions=global_additions)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in > _pipeline_app_context#012 for name in pipeline[:-1]]#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in > get_context#012 section)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in > _context_from_use#012 object_type, name=use, global_conf=global_conf)#012 > File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in > get_context#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in > loadcontext#012 global_conf=global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in > _loadegg#012 return loader.get_context(object_type, name, > global_conf)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in > get_context#012 object_type > , name=name)#012 File > "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587, > > > Any help appreciated. > > Regards > > John O'Loughlin > FEPS IT, Service Delivery Team Leader > _______________________________________________ > Openstack-operators mailing list > Openstack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL:
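A quick way to check the paste configuration from the proxy thread above without going through swift-init is to load it directly and let paste.deploy report the offending section (the path below is the usual default and is assumed here):

python -c "from paste.deploy import loadapp; loadapp('config:/etc/swift/proxy-server.conf')"

If the file is valid this exits quietly; otherwise it raises the same error the proxy server hits at startup, which usually points at the misspelled filter or missing egg.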