[Openstack] Openstack Digest, Vol 4, Issue 29
Heidi Bretz
heidi at openstack.org
Fri Oct 25 15:50:20 UTC 2013
-----Original Message-----
From: openstack-request at lists.openstack.org
[mailto:openstack-request at lists.openstack.org]
Sent: Friday, October 25, 2013 5:00 AM
To: openstack at lists.openstack.org
Subject: Openstack Digest, Vol 4, Issue 29
Send Openstack mailing list submissions to
openstack at lists.openstack.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
or, via email, send a message with subject or body 'help' to
openstack-request at lists.openstack.org
You can reach the person managing the list at
openstack-owner at lists.openstack.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Openstack digest..."
Today's Topics:
1. Re: Migrate instances/tenants between clouds (Joe Topjian)
2. Re: Migrate instances/tenants between clouds (Robert Collins)
3. Re: Help for creating the bridge br100 (Razique Mahroua)
4. Re: Migrate instances/tenants between clouds (Joshua Harlow)
5. Re: Migrate instances/tenants between clouds (Robert Collins)
6. libvirtd and Folsom Quantum/Neutron and iptables (Craig E. Ward)
7. Re: Migrate instances/tenants between clouds (Joshua Harlow)
8. The VM got an incorrect DNS server IP (???)
9. Re: Directional network performance issues with Neutron +
OpenvSwitch (Martinx - ?????)
10. Test Mail (Rajashree Thorat)
11. The same IP configured on different compute nodes (???)
12. Re: Snapshot failure with VMwareVCDriver (Gary Kotton)
13. The dashboard crash after attaching a volume to a VM (???)
14. Re: libvirtd and Folsom Quantum/Neutron and iptables
(Daniel P. Berrange)
15. Re: The dashboard crash after attaching a volume to a VM (???)
16. Re: Re: The dashboard crash after attaching a volume to a VM
(Lingala Srikanth Kumar-B37208)
17. Re: Migrate instances/tenants between clouds (Alexander Stellwag)
18. Re: Migrate instances/tenants between clouds (Alexander Stellwag)
19. Re: Directional network performance issues with Neutron +
OpenvSwitch (Darragh O'Reilly)
20. Doc or Link for enabling Active Directory authentication for
login information in Havana (ankush grover)
----------------------------------------------------------------------
Message: 1
Date: Thu, 24 Oct 2013 15:30:04 -0600
From: Joe Topjian <joe.topjian at cybera.ca>
To: Joshua Harlow <harlowja at yahoo-inc.com>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID:
<CAD07=DcLMFFkmOj7cufmKWJK4RLN5tsdxvq5bShiu6JDAWvZEA at mail.gmail.com>
Content-Type: text/plain; charset="gb2312"
I think the ability to migrate instances and projects between clouds is
very valid and applies to other use cases besides avoiding an upgrade.
The instance migration process that Tim describes is one that we use, too.
It's relatively easy for end-users and requires no admin intervention.
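In CLI terms that flow is roughly the following (a sketch with illustrative
names, using the nova/glance clients of that era; adjust flavors and image
formats to taste):

# in the source cloud
nova image-create my-vm my-vm-snap
glance image-download my-vm-snap --file my-vm-snap.img
# then, with credentials pointed at the destination cloud
glance image-create --name my-vm-snap --disk-format qcow2 \
    --container-format bare --file my-vm-snap.img
nova boot --flavor m1.small --image my-vm-snap my-vm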
Project migration is a very interesting concept. I'd imagine it would
include things like users, key pair entries, security groups, networks, etc.
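A first pass at scoping that inventory could be just the era's listing
commands (illustrative only; none of these move anything by themselves):

keystone user-list
nova keypair-list
nova secgroup-list
nova secgroup-list-rules default
nova network-list    # or `neutron net-list` on a Neutron-based cloud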
I agree with Joshua about why it's bad to avoid upgrades. I've seen a ton
of effort go into making in-place upgrades possible and safe and it's very
much appreciated. I upgraded an environment from Folsom to Grizzly a few
weeks ago and none of the users noticed it happened.
I definitely don't recommend blindly doing "apt-get dist-upgrade" on a
production environment, though :)
On the flip side, sometimes it can be easier to just migrate rather than
upgrade in-place. I think where the line is drawn with that decision varies
from person to person.
On Thu, Oct 24, 2013 at 2:56 PM, Joshua Harlow
<harlowja at yahoo-inc.com>wrote:
> I agree it's not simple (from experience).
>
> There is a difference, though, between 'simpler for now' and the 'right
> thing to do'.
>
> If software doesn't support the right thing to do ('upgrade',
> 'dist-upgrade' - or the yum equivalent), then the community needs to make
> that work (and stop with new features).
>
> Otherwise the community is in a whole world of trouble. If OpenStack is
> to make life better for operators and other people, and the only way to
> operate it is to build new regions, then something feels backwards in
> this equation.
>
> IMHO, bypassing the problem is not a long-term solution (and it is not
> healthy for OpenStack to even recommend this).
>
> I honestly don't think it's even a short-term solution, since building a
> new region costs $$ and downtime. Who wants to do that every 6 months? I
> can surely tell you I don't :)
>
> From: Martinx - ????? <thiagocmartinsc at gmail.com>
> Date: Thursday, October 24, 2013 1:17 PM
> To: Joshua Harlow <harlowja at yahoo-inc.com>
> Cc: Tim Bell <Tim.Bell at cern.ch>, Alexander Stellwag <
> openstack at stellwag.net>, "openstack at lists.openstack.org" <
> openstack at lists.openstack.org>
> Subject: Re: [Openstack] Migrate instances/tenants between clouds
>
> I do not even want to try to upgrade my system...
>
> I know that dpkg is awesome and handles system upgrades carefully, but it
> sounds simpler to just build a new Region...
>
> This doesn't sound trivial (like `apt-get update ; apt-get
> dist-upgrade`):
>
> http://www.openstack.org/summit/portland-2013/session-videos/presentation/getting-from-grizzly-to-havana-a-devops-upgrade-pattern
>
> Don't you think!?
>
>
>
> On 24 October 2013 17:48, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
>
>> Whatever happened to doing in-place upgrades?
>>
>> Has that been problematic for you? Are people just not doing it? If they
>> are avoiding it, why?
>>
>> Shouldn't just going from folsom->havana work? If not, why not? It
>> worries me that this isn't priority #0 if it doesn't work.
>>
>> On 10/24/13 10:51 AM, "Tim Bell" <Tim.Bell at cern.ch> wrote:
>>
>> >
>> >Have you tried
>> >
>> >- snapshot
>> >- glance download of the snapshot
>> >- glance upload of the snapshot to new instance
>> >- boot from snapshot
>> >
>> >The process we use at CERN is documented at
>> >
>> >http://information-technology.web.cern.ch/book/cern-cloud-infrastructure-user-guide/images/migrating-using-images
>> >
>> >This could be a good technique to document in the standard CERN
>> >OpenStack user guide.
>> >
>> >BTW, this does increase the storage in your Glance server, since you'll
>> >lose any commonality between multiple images. So, make sure you've lots
>> >of space on Glance.
>> >
>> >Tim
>> >
>> >> -----Original Message-----
>> >> From: Alexander Stellwag [mailto:openstack at stellwag.net]
>> >> Sent: 24 October 2013 16:53
>> >> To: openstack at lists.openstack.org
>> >> Subject: [Openstack] Migrate instances/tenants between clouds
>> >>
>> >> Hi stackers,
>> >>
>> >> we're looking for a tool / script / blueprint to migrate instances or
>> >> even complete tenants between multiple installations of OpenStack
>> >> (possibly running different versions).
>> >>
>> >> I searched around the net but didn't find anything appropriate. Are
>> >> any of you aware of such a tool?
>> >>
>> >> The current use-case is a migration from a folsom/nova-network based
>> >> installation into our new havana/neutron based cloud. It is not
>> >> necessary to migrate instances and volumes online, but it should work
>> >> at least semi-automatically to make it usable in large deployments.
>> >>
>> >> Any hints would be greatly appreciated.
>> >>
>> >> Cheers,
>> >> Alex
>> >> --
>> >> Alexander Stellwag
>> >> Deutsche Telekom AG Products & Innovation Infrastructure Design
--
Joe Topjian
Systems Architect
Cybera Inc.
www.cybera.ca
Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.
------------------------------
Message: 2
Date: Fri, 25 Oct 2013 11:03:16 +1300
From: Robert Collins <robertc at robertcollins.net>
To: Shane Johnson <sdj at rasmussenequipment.com>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID:
<CAJ3HoZ1+c+=1gsd=JLXdxQ8hNkMAoVQGtZx1112BSPCZ3LVRJg at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On 25 October 2013 10:20, Shane Johnson <sdj at rasmussenequipment.com> wrote:
>
>
> But doesn't that responsibility lie with the package maintainers for
> OpenStack in each distro?
No. The responsibility for making it possible to upgrade in place
*reliably* and *with confidence* is an entirely upstream problem.
Rigorous backwards compatibility, rigorous testing for performance
regressions, reliable upgrade paths, being able to achieve low/no
downtime on the cloud during upgrade. All upstream problems.
Putting the code into packages and making the packages install well is
a distribution problem. For distributions with orchestration code tied
into them, orchestrating the upgrade is also a distribution problem
(e.g. RDO and UO both offer fully orchestrated deployments, so any
upgrade orchestration will be bundled there).
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud
------------------------------
Message: 3
Date: Thu, 24 Oct 2013 15:28:02 -0700
From: Razique Mahroua <razique.mahroua at gmail.com>
To: ??? <dongjh at nci.com.cn>
Cc: openstack <Openstack at lists.openstack.org>
Subject: Re: [Openstack] Help for creating the bridge br100
Message-ID: <25E77099-8A48-4CDE-8B17-83837D0CCC24 at gmail.com>
Content-Type: text/plain; charset="utf-8"
No, you don't have to.
Check the instance's log to see if it retrieves an IP :)
On 23 Oct 2013, at 23:34, ??? <dongjh at nci.com.cn> wrote:
>
> Sure.
>
> root at controller:~# nova secgroup-list
> +----+---------+-------------+
> | Id | Name | Description |
> +----+---------+-------------+
> | 1 | default | default |
> +----+---------+-------------+
> root at controller:~# nova secgroup-list-rules 1
> +-------------+-----------+---------+-----------+--------------+
> | IP Protocol | From Port | To Port | IP Range | Source Group |
> +-------------+-----------+---------+-----------+--------------+
> | tcp | 22 | 22 | 0.0.0.0/0 | |
> | icmp | -1 | -1 | 0.0.0.0/0 | |
> +-------------+-----------+---------+-----------+--------------+
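For reference, rules like the two above are created with the standard nova
client syntax of the time:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0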
>
> From: Razique Mahroua
> Date: 2013-10-24 14:25
> To: ???
> CC: openstack
> Subject: Re: [Openstack] Help for creating the bridge br100
> Did you create the security rules for ICMP echo request/reply?
>
> On Oct 23, 2013, at 22:40, ??? <dongjh at nci.com.cn> wrote:
>
>>
>> Hi Mahroua,
>>
>> As you suggested, I started the instances after adding the bridge br100
>> manually. The VMs started successfully but were not pingable; however,
>> the IP of the bridge itself was pingable. What is the issue? Should I
>> associate a physical NIC with br100?
>>
>> root at compute1:~# nova list
>> +--------------------------------------+--------+--------+------------+-------------+----------------------+
>> | ID                                   | Name   | Status | Task State | Power State | Networks             |
>> +--------------------------------------+--------+--------+------------+-------------+----------------------+
>> | 4560a831-e3e4-4db9-860e-361834d9cca4 | cirrOS | ACTIVE | None       | Running     | vmnet=192.168.11.195 |
>> | ac659c91-520b-4b35-8d0d-7a5f2a6c82c4 | cirrOS | ACTIVE | None       | Running     | vmnet=192.168.11.196 |
>> +--------------------------------------+--------+--------+------------+-------------+----------------------+
>> root at compute1:~# ip addr
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> inet 169.254.169.254/32 scope link lo
>> inet6 ::1/128 scope host
>> valid_lft forever preferred_lft forever
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>> link/ether b8:ca:3a:ec:66:8c brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::baca:3aff:feec:668c/64 scope link
>> valid_lft forever preferred_lft forever
>> 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>> link/ether b8:ca:3a:ec:66:8e brd ff:ff:ff:ff:ff:ff
>> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>> link/ether b8:ca:3a:ec:66:90 brd ff:ff:ff:ff:ff:ff
>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>> link/ether b8:ca:3a:ec:66:92 brd ff:ff:ff:ff:ff:ff
>> 6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>> link/ether 00:0a:f7:24:1e:40 brd ff:ff:ff:ff:ff:ff
>> inet 10.10.10.182/24 brd 10.10.10.255 scope global eth4
>> inet6 fe80::20a:f7ff:fe24:1e40/64 scope link
>> valid_lft forever preferred_lft forever
>> 7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>> link/ether 00:0a:f7:24:1e:42 brd ff:ff:ff:ff:ff:ff
>> 9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
>> link/ether b2:43:fa:53:cc:11 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>> 10: br100: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>> link/ether fa:16:3e:f3:f0:04 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.11.193/27 brd 192.168.11.223 scope global br100
>> 11: vlan100 at br100: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master br100 state DOWN
>> link/ether fa:16:3e:f3:f0:04 brd ff:ff:ff:ff:ff:ff
>> 12: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br100 state UNKNOWN qlen 500
>> link/ether fe:16:3e:fa:60:5c brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc16:3eff:fefa:605c/64 scope link
>> valid_lft forever preferred_lft forever
>> root at compute1:~# ifconfig -a
>> br100 Link encap:Ethernet HWaddr fa:16:3e:f3:f0:04
>> inet addr:192.168.11.193 Bcast:192.168.11.223 Mask:255.255.255.224
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:0
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>
>> eth0 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:8c
>> inet6 addr: fe80::baca:3aff:feec:668c/64 Scope:Link
>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>> RX packets:28816 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:2514 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:2924850 (2.9 MB) TX bytes:868260 (868.2 KB)
>> Interrupt:34 Memory:d1000000-d17fffff
>>
>> eth1 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:8e
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>> Interrupt:36 Memory:d2000000-d27fffff
>>
>> eth2 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:90
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>> Interrupt:36 Memory:d3000000-d37fffff
>>
>> eth3 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:92
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>> Interrupt:37 Memory:d4000000-d47fffff
>>
>> eth4 Link encap:Ethernet HWaddr 00:0a:f7:24:1e:40
>> inet addr:10.10.10.182 Bcast:10.10.10.255 Mask:255.255.255.0
>> inet6 addr: fe80::20a:f7ff:fe24:1e40/64 Scope:Link
>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>> RX packets:243487 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:238920 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:94402647 (94.4 MB) TX bytes:58522363 (58.5 MB)
>> Interrupt:72 Memory:c8000000-c87fffff
>>
>> eth5 Link encap:Ethernet HWaddr 00:0a:f7:24:1e:42
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:1000
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>> Interrupt:76 Memory:c9000000-c97fffff
>>
>> lo Link encap:Local Loopback
>> inet addr:127.0.0.1 Mask:255.0.0.0
>> inet6 addr: ::1/128 Scope:Host
>> UP LOOPBACK RUNNING MTU:16436 Metric:1
>> RX packets:8 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:0
>> RX bytes:672 (672.0 B) TX bytes:672 (672.0 B)
>>
>> virbr0 Link encap:Ethernet HWaddr b2:43:fa:53:cc:11
>> inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
>> UP BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:0
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>
>> vlan100 Link encap:Ethernet HWaddr fa:16:3e:f3:f0:04
>> BROADCAST MULTICAST MTU:1500 Metric:1
>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:0
>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>
>> vnet0 Link encap:Ethernet HWaddr fe:16:3e:fa:60:5c
>> inet6 addr: fe80::fc16:3eff:fefa:605c/64 Scope:Link
>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>> RX packets:9 errors:0 dropped:0 overruns:0 frame:0
>> TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>> collisions:0 txqueuelen:500
>> RX bytes:1434 (1.4 KB) TX bytes:468 (468.0 B)
>>
>> root at compute1:~# ping 192.168.11.193
>> PING 192.168.11.193 (192.168.11.193) 56(84) bytes of data.
>> 64 bytes from 192.168.11.193: icmp_req=1 ttl=64 time=0.070 ms
>> 64 bytes from 192.168.11.193: icmp_req=2 ttl=64 time=0.036 ms
>> ^C
>> --- 192.168.11.193 ping statistics ---
>> 2 packets transmitted, 2 received, 0% packet loss, time 999ms
>> rtt min/avg/max/mdev = 0.036/0.053/0.070/0.017 ms
>> root at compute1:~# ping 192.168.11.195
>> connect: Network is unreachable
>> root at compute1:~# ping 192.168.11.196
>> connect: Network is unreachable
>> root at compute1:~# brctl show br100
>> bridge name     bridge id               STP enabled     interfaces
>> br100           8000.fa163ef3f004       no              vlan100
>>                                                         vnet0
>>
>>
>> From: Razique Mahroua
>> Date: 2013-10-24 12:23
>> To: ???
>> CC: Openstack
>> Subject: Re: [Openstack] Help for creating the bridge br100
>> Hi,
>> you can try:
>> brctl addbr br100
>> then restart the nova-network and nova-compute services, they'll hook it up correctly :)
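A sketch of that sequence on the compute node (assuming Ubuntu-style
service names):

brctl addbr br100
service nova-network restart
service nova-compute restart
brctl show br100    # vlan100 and the vnetX taps should appear as interfaces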
>>
>> On 23 Oct 2013, at 04:55, ??? <dongjh at nci.com.cn> wrote:
>>
>>> Hello all,
>>>
>>> I'm trying to install OpenStack and launch the first VM, following the
>>> installation guide "openstack-install-guide-apt-havana.pdf". I have one
>>> controller node plus two compute nodes.
>>>
>>> After creating the network vmnet, I tried to launch the first CirrOS VM.
>>> I saw that the bridge br100 was created on the first compute node
>>> automatically, and an IP was assigned to this newly created bridge (the
>>> last valid IP in the vmnet subnet). However, I found that the netmask
>>> for vmnet was not correct, so I removed the bridge br100 from the first
>>> compute node via the 'brctl' command, then removed the network 'vmnet'
>>> via the commands "nova network-disassociate" and "nova net-delete", and
>>> finally added the network 'vmnet' back with the correct netmask. After
>>> doing that, launching a VM fails with the error 'cannot setup interface:
>>> No such device', even if I stop the 'nova-network' and 'nova-compute'
>>> services on the first compute node (i.e. the same issue appears on the
>>> second compute node). Sometimes when I restart the 'nova-network' and
>>> 'nova-compute' services on the compute nodes, they throw similar error
>>> messages and fail to start, but not always.
>>>
>>> How can I have nova create the bridge br100 on the compute nodes
>>> automatically? Should I clean the MySQL database or re-install the
>>> nova-network package on the compute nodes?
>>>
>>> Thanks.
>>>
>>> root at controller:~# nova image-list
>>> +--------------------------------------+--------------+--------+--------+
>>> | ID                                   | Name         | Status | Server |
>>> +--------------------------------------+--------------+--------+--------+
>>> | 26fa8866-d075-444d-9844-61b7c22e724b | CirrOS 0.3.1 | ACTIVE |        |
>>> +--------------------------------------+--------------+--------+--------+
>>> root at controller:~# nova help start
>>> usage: nova start <server>
>>>
>>> Start a server.
>>>
>>> Positional arguments:
>>> <server> Name or ID of server.
>>> root at controller:~# nova boot --flavor 1 --key_name mykey --image 26fa8866-d075-444d-9844-61b7c22e724b --security_group default cirrOS
>>> +--------------------------------------+--------------------------------------+
>>> | Property                             | Value                                |
>>> +--------------------------------------+--------------------------------------+
>>> | OS-EXT-STS:task_state                | scheduling                           |
>>> | image                                | CirrOS 0.3.1                         |
>>> | OS-EXT-STS:vm_state                  | building                             |
>>> | OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                    |
>>> | OS-SRV-USG:launched_at               | None                                 |
>>> | flavor                               | m1.tiny                              |
>>> | id                                   | 0d3afadb-ec5c-4f26-ad7f-707679be6b3a |
>>> | security_groups                      | [{u'name': u'default'}]              |
>>> | user_id                              | eecb2b5f2b4f481980a5546af680481c     |
>>> | OS-DCF:diskConfig                    | MANUAL                               |
>>> | accessIPv4                           |                                      |
>>> | accessIPv6                           |                                      |
>>> | progress                             | 0                                    |
>>> | OS-EXT-STS:power_state               | 0                                    |
>>> | OS-EXT-AZ:availability_zone          | nova                                 |
>>> | config_drive                         |                                      |
>>> | status                               | BUILD                                |
>>> | updated                              | 2013-10-23T11:40:23Z                 |
>>> | hostId                               |                                      |
>>> | OS-EXT-SRV-ATTR:host                 | None                                 |
>>> | OS-SRV-USG:terminated_at             | None                                 |
>>> | key_name                             | mykey                                |
>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
>>> | name                                 | cirrOS                               |
>>> | adminPass                            | FSUxpN9QrkhC                         |
>>> | tenant_id                            | 382ce85ef00948a3a1442e44f9d033ed     |
>>> | created                              | 2013-10-23T11:40:23Z                 |
>>> | os-extended-volumes:volumes_attached | []                                   |
>>> | metadata                             | {}                                   |
>>> +--------------------------------------+--------------------------------------+
>>> root at controller:~# nova show 0d3afadb-ec5c-4f26-ad7f-707679be6b3a
>>> +--------------------------------------+--------------------------------------+
>>> | Property                             | Value                                |
>>> +--------------------------------------+--------------------------------------+
>>> | status                               | ERROR |
>>> | updated                              | 2013-10-23T11:40:29Z |
>>> | OS-EXT-STS:task_state                | None |
>>> | OS-EXT-SRV-ATTR:host                 | compute1 |
>>> | key_name                             | mykey |
>>> | image                                | CirrOS 0.3.1 (26fa8866-d075-444d-9844-61b7c22e724b) |
>>> | vmnet network                        | 192.168.11.195 |
>>> | hostId                               | 5ce24c402b2346e375fca2455e0d6dbaf0405f2d46b1e6eaf3b30742 |
>>> | OS-EXT-STS:vm_state                  | error |
>>> | OS-EXT-SRV-ATTR:instance_name        | instance-0000000b |
>>> | OS-SRV-USG:launched_at               | None |
>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1 |
>>> | flavor                               | m1.tiny (1) |
>>> | id                                   | 0d3afadb-ec5c-4f26-ad7f-707679be6b3a |
>>> | security_groups                      | [{u'name': u'default'}] |
>>> | OS-SRV-USG:terminated_at             | None |
>>> | user_id                              | eecb2b5f2b4f481980a5546af680481c |
>>> | name                                 | cirrOS |
>>> | created                              | 2013-10-23T11:40:23Z |
>>> | tenant_id                            | 382ce85ef00948a3a1442e44f9d033ed |
>>> | OS-DCF:diskConfig                    | MANUAL |
>>> | metadata                             | {} |
>>> | os-extended-volumes:volumes_attached | [] |
>>> | accessIPv4                           | |
>>> | accessIPv6                           | |
>>> | fault                                | {u'message': u"Remote error: ProcessExecutionError Unexpected error while running command. |
>>> |                                      | Command: sudo nova-rootwrap /etc/nova/rootwrap.conf dhcp_release br100 192.168.11.195 fa:16:3e:13:e9:90 |
>>> |                                      | Exit code: 1 |
>>> |                                      | Stdout: '' |
>>> |                                      | Stderr: 'cannot setup interface: No such device\ |
>>> |                                      | ' |
>>> |                                      | ", u'code': 500, u'details': u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in decorated_function |
>>> |                                      |   return function(self, context, *args, **kwargs) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1613, in run_instance |
>>> |                                      |   do_run_instance() |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 246, in inner |
>>> |                                      |   return f(*args, **kwargs) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1612, in do_run_instance |
>>> |                                      |   legacy_bdm_in_spec) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 962, in _run_instance |
>>> |                                      |   notify("error", msg=unicode(e))  # notify that build failed |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 946, in _run_instance |
>>> |                                      |   instance, image_meta, legacy_bdm_in_spec) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1075, in _build_instance |
>>> |                                      |   filter_properties, bdms, legacy_bdm_in_spec) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1119, in _reschedule_or_error |
>>> |                                      |   self._log_original_error(exc_info, instance_uuid) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1114, in _reschedule_or_error |
>>> |                                      |   bdms, requested_networks) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1664, in _shutdown_instance |
>>> |                                      |   self._try_deallocate_network(context, instance, requested_networks) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1624, in _try_deallocate_network |
>>> |                                      |   self._set_instance_error_state(context, instance[\'uuid\']) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1619, in _try_deallocate_network |
>>> |                                      |   self._deallocate_network(context, instance, requested_networks) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1463, in _deallocate_network |
>>> |                                      |   context, instance, requested_networks=requested_networks) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 93, in wrapped |
>>> |                                      |   return func(self, context, *args, **kwargs) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 317, in deallocate_for_instance |
>>> |                                      |   self.network_rpcapi.deallocate_for_instance(context, **args) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 193, in deallocate_for_instance |
>>> |                                      |   host=host, requested_networks=requested_networks) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/rpcclient.py", line 85, in call |
>>> |                                      |   return self._invoke(self.proxy.call, ctxt, method, **kwargs) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/rpcclient.py", line 63, in _invoke |
>>> |                                      |   return cast_or_call(ctxt, msg, **self.kwargs) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", line 126, in call |
>>> |                                      |   result = rpc.call(context, real_topic, msg, timeout) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py", line 139, in call |
>>> |                                      |   return _get_impl().call(CONF, context, topic, msg, timeout) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 816, in call |
>>> |                                      |   rpc_amqp.get_connection_pool(conf, Connection)) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 574, in call |
>>> |                                      |   rv = list(rv) |
>>> |                                      | File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 539, in __iter__ |
>>> |                                      |   raise result |
>>> |                                      | ', u'created': u'2013-10-23T11:40:29Z'} |
>>> | OS-EXT-STS:power_state               | 0 |
>>> | OS-EXT-AZ:availability_zone          | nova |
>>> | config_drive                         | |
>>> +--------------------------------------+--------------------------------------+
>>> root at controller:~#
>>>
>>>
>>> root at controller:~# nova service-list
>>> +------------------+------------+----------+---------+-------+----------------------------+-----------------+
>>> | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
>>> +------------------+------------+----------+---------+-------+----------------------------+-----------------+
>>> | nova-cert        | controller | internal | enabled | up    | 2013-10-23T11:41:59.000000 | None            |
>>> | nova-consoleauth | controller | internal | enabled | up    | 2013-10-23T11:42:02.000000 | None            |
>>> | nova-scheduler   | controller | internal | enabled | up    | 2013-10-23T11:42:04.000000 | None            |
>>> | nova-conductor   | controller | internal | enabled | up    | 2013-10-23T11:42:00.000000 | None            |
>>> | nova-network     | compute1   | internal | enabled | up    | 2013-10-23T11:42:04.000000 | None            |
>>> | nova-network     | compute2   | internal | enabled | up    | 2013-10-23T11:41:56.000000 | None            |
>>> | nova-compute     | compute1   | nova     | enabled | up    | 2013-10-23T11:41:58.000000 | None            |
>>> | nova-compute     | compute2   | nova     | enabled | up    | 2013-10-23T11:42:00.000000 | None            |
>>> +------------------+------------+----------+---------+-------+----------------------------+-----------------+
>>>
>>> root at compute1:~# ifconfig -a
>>> eth0 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:8c
>>> inet6 addr: fe80::baca:3aff:feec:668c/64 Scope:Link
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:56 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:6989 (6.9 KB) TX bytes:4298 (4.2 KB)
>>> Interrupt:34 Memory:d1000000-d17fffff
>>>
>>> eth1 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:8e
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>> Interrupt:36 Memory:d2000000-d27fffff
>>>
>>> eth2 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:90
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>> Interrupt:36 Memory:d3000000-d37fffff
>>>
>>> eth3 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:92
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>> Interrupt:37 Memory:d4000000-d47fffff
>>>
>>> eth4 Link encap:Ethernet HWaddr 00:0a:f7:24:1e:40
>>> inet addr:10.10.10.182 Bcast:10.10.10.255 Mask:255.255.255.0
>>> inet6 addr: fe80::20a:f7ff:fe24:1e40/64 Scope:Link
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:551 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:476 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:95077 (95.0 KB) TX bytes:70754 (70.7 KB)
>>> Interrupt:72 Memory:c8000000-c87fffff
>>>
>>> eth5 Link encap:Ethernet HWaddr 00:0a:f7:24:1e:42
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>> Interrupt:76 Memory:c9000000-c97fffff
>>>
>>> lo Link encap:Local Loopback
>>> inet addr:127.0.0.1 Mask:255.0.0.0
>>> inet6 addr: ::1/128 Scope:Host
>>> UP LOOPBACK RUNNING MTU:16436 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>> virbr0 Link encap:Ethernet HWaddr b2:43:fa:53:cc:11
>>> inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
>>> UP BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>>
>>> My internal NIC is eth4 and eth0 will be the bridge/public interface.
>>>
>>>
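For the bridge to reach other hosts, nova-network also needs to know which
physical NIC carries the VM traffic. A hypothetical nova.conf excerpt for
this setup (the actual config was not posted, so take the values as
assumptions):

# /etc/nova/nova.conf on the compute nodes (hypothetical)
network_manager = nova.network.manager.VlanManager
vlan_interface = eth4    # the NIC that vlan100/br100 should be stacked on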
------------------------------
Message: 4
Date: Thu, 24 Oct 2013 22:57:54 +0000
From: Joshua Harlow <harlowja at yahoo-inc.com>
To: Robert Collins <robertc at robertcollins.net>, Shane Johnson
<sdj at rasmussenequipment.com>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID: <CE8EF36D.4AD51%harlowja at yahoo-inc.com>
Content-Type: text/plain; charset="us-ascii"
Completely agree, and I think there are many people who know it's an
issue, which is positive news.
There are always people who 'pave' over their clouds instead, which to me
is not positive news;
In fact I think such activities are detrimental to the whole community
(your opinion may vary).
To me this is why OpenStack **must** provide a reference cloud and do as
much of this rigorous "checking" automatically.
-Josh
On 10/24/13 3:03 PM, "Robert Collins" <robertc at robertcollins.net> wrote:
>On 25 October 2013 10:20, Shane Johnson <sdj at rasmussenequipment.com>
>wrote:
>
>>
>>
>> But doesn't that responsibility lie with the package maintainers for
>> OpenStack in each distro?
>
>No. The responsibility for making it possible to upgrade in place
>*reliably* and *with confidence* is an entirely upstream problem.
>Rigorous backwards compatibility, rigorous testing for performance
>regressions, reliable upgrade paths, being able to achieve low/no
>downtime on the cloud during upgrade. All upstream problems.
>
>Putting the code into packages and making the packages install well is
>a distribution problem. For distributions with orchestration code tied
>into them, orchestrating the upgrade is also a distribution problem
>(e.g. RDO and UO both offer fully orchestrated deployments, so any
>upgrade orchestration will be bundled there).
>
>-Rob
>
>
>--
>Robert Collins <rbtcollins at hp.com>
>Distinguished Technologist
>HP Converged Cloud
>
------------------------------
Message: 5
Date: Fri, 25 Oct 2013 12:00:11 +1300
From: Robert Collins <robertc at robertcollins.net>
To: Joshua Harlow <harlowja at yahoo-inc.com>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID:
<CAJ3HoZ1E9D27J5b-cqx=-Yj7qxiUVNFc+NaniXzxQHHX8MZsjg at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On 25 October 2013 11:57, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
> Completely agree, and I think there are many people who know it's an
> issue, which is positive news.
>
> There are always people who 'pave' over their clouds instead, which to me
> is not positive news;
>
> In fact I think such activities are detrimental to the whole community
> (your opinion may vary).
>
> To me this is why OpenStack **must** provide a reference cloud and do as
> much of this rigorous "checking" automatically.
>
> -Josh
https://wiki.openstack.org/TripleO/TripleOCloud :)
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud
------------------------------
Message: 6
Date: Thu, 24 Oct 2013 16:02:04 -0700
From: "Craig E. Ward" <cward at isi.edu>
To: OpenStack Mailing List <openstack at lists.openstack.org>
Subject: [Openstack] libvirtd and Folsom Quantum/Neutron and iptables
Message-ID: <5269A6EC.1 at ISI.EDU>
Content-Type: text/plain; charset=UTF-8; format=flowed
I have a Folsom installation that has re-tasked some of the host hardware.
What was a nova compute node is now a Quantum (agent) node. In the
conversion, the libvirtd service was not removed. It looks like it could be
causing some issues with the iptables rules.
Will libvirtd insert rules that may conflict with the rules inserted by
Quantum? Or do I need to look elsewhere for conflicts?
Is there any reason to leave libvirtd running on a Folsom Quantum node?
Thanks,
Craig
--
Craig E. Ward
USC Information Sciences Institute
cward at ISI.EDU
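One way to check whether libvirtd is the source of the rules (a diagnostic
sketch; the 'default' libvirt NAT network is the usual suspect):

virsh net-list --all            # look for the 'default' network
iptables-save | grep virbr0     # rules libvirtd added for it, if any
virsh net-destroy default       # safe only if nothing on the node uses it
virsh net-autostart default --disable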
------------------------------
Message: 7
Date: Thu, 24 Oct 2013 23:05:54 +0000
From: Joshua Harlow <harlowja at yahoo-inc.com>
To: Robert Collins <robertc at robertcollins.net>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID: <CE8EF55A.4AD5B%harlowja at yahoo-inc.com>
Content-Type: text/plain; charset="iso-8859-1"
Yup, of course, that is being worked on, and I hope it is that :-)
It just scares me and others that it took this long; I think that's partly
due to the 'pave your cloud' approach.
That's part of what just feels backwards to me: how we ever got into this
situation in the first place, where paving was considered a valid approach.
On 10/24/13 4:00 PM, "Robert Collins" <robertc at robertcollins.net> wrote:
>On 25 October 2013 11:57, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
>> Completely agree, and I think there are many people who know it's an
>> issue, which is positive news.
>>
>> There are always people who 'pave' over their clouds instead, which to
>> me is not positive news;
>>
>> In fact I think such activities are detrimental to the whole community
>> (your opinion may vary).
>>
>> To me this is why OpenStack **must** provide a reference cloud and do as
>> much of this rigorous "checking" automatically.
>>
>> -Josh
>
>https://wiki.openstack.org/TripleO/TripleOCloud :)
>
>-Rob
>
>--
>Robert Collins <rbtcollins at hp.com>
>Distinguished Technologist
>HP Converged Cloud
------------------------------
Message: 8
Date: Fri, 25 Oct 2013 10:53:14 +0800
From: ??? <dongjh at nci.com.cn>
To: openstack <openstack at lists.openstack.org>
Subject: [Openstack] The VM got an incorrect DNS server IP
Message-ID: <2013102510531072487314 at nci.com.cn>
Content-Type: text/plain; charset="gb2312"
Hi all,
I set --dns1 and --dns2 when creating the vmnet; however, the VM instance
gets the default gateway IP as its DNS server. What is the issue?
root at controller:~# nova network-create vmnet --bridge-interface=br100 --multi-host=T --gateway=192.168.11.254 --dns1=221.12.1.227 --dns2=221.12.1.228
root at controller:~# nova network-show d960f9aa-647a-4fa6-97d4-bc9e8b322015
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| bridge | br100 |
| vpn_public_port | 1000 |
| dhcp_start | 192.168.11.3 |
| bridge_interface | br100 |
| updated_at | 2013-10-25T02:01:38.000000 |
| id | d960f9aa-647a-4fa6-97d4-bc9e8b322015 |
| cidr_v6 | None |
| deleted_at | None |
| gateway | 192.168.11.254 |
| rxtx_base | None |
| label | vmnet |
| priority | None |
| project_id | 382ce85ef00948a3a1442e44f9d033ed |
| vpn_private_address | 192.168.11.2 |
| deleted | 0 |
| vlan | 100 |
| broadcast | 192.168.11.255 |
| netmask | 255.255.255.0 |
| injected | False |
| cidr | 192.168.11.0/24 |
| vpn_public_address | 10.10.10.182 |
| multi_host | True |
| dns2 | 221.12.1.228 |
| created_at | 2013-10-25T01:47:05.000000 |
| host | None |
| gateway_v6 | None |
| netmask_v6 | None |
| dns1 | 221.12.1.227 |
+---------------------+--------------------------------------+
root at controller:~# nova boot --flavor 1 --key_name mykey --image 26fa8866-d075-444d-9844-61b7c22e724b --security-groups default CirrOS
root at controller:~# nova console-log CirrOS
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.2.0-37-virtual (buildd at allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
[ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] Centaur CentaurHauls
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
[ 0.000000] BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 000000001fffe000 (usable)
[ 0.000000] BIOS-e820: 000000001fffe000 - 0000000020000000 (reserved)
[ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI 2.4 present.
[ 0.000000] No AGP bridge found
[ 0.000000] last_pfn = 0x1fffe max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
[ 0.000000] found SMP MP-table at [ffff8800000f1610] f1610
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] init_memory_mapping: 0000000000000000-000000001fffe000
[ 0.000000] RAMDISK: 1fc96000 - 1ffee000
[ 0.000000] ACPI: RSDP 00000000000f1470 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 000000001fffe450 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 000000001fffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 000000001fffe490 01137 (v01 BXPC BXDSDT 00000001 INTL 20100528)
[ 0.000000] ACPI: FACS 000000001fffff40 00040
[ 0.000000] ACPI: SSDT 000000001ffff700 00838 (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: APIC 000000001ffff610 00078 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 000000001ffff5d0 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-000000001fffe000
[ 0.000000] Initmem setup node 0 0000000000000000-000000001fffe000
[ 0.000000] NODE_DATA [000000001fff6000 - 000000001fffafff]
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA32 0x00001000 -> 0x00100000
[ 0.000000] Normal empty
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[2] active PFN ranges
[ 0.000000] 0: 0x00000010 -> 0x0000009f
[ 0.000000] 0: 0x00000100 -> 0x0001fffe
[ 0.000000] ACPI: PM-Timer IO Port: 0xb008
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[ 0.000000] Allocating PCI resources starting at 20000000 (gap: 20000000:deffc000)
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88001fa00000 s82880 r8192 d23616 u2097152
[ 0.000000] kvm-clock: cpu 0, msr 0:1fa13681, primary cpu clock
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 1fa0dd40
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 128904
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.000000] xsave/xrstor: enabled xstate_bv 0x7, cntxt size 0x340
[ 0.000000] Checking aperture...
[ 0.000000] No AGP bridge found
[ 0.000000] Memory: 496328k/524280k available (6541k kernel code, 452k absent, 27500k reserved, 6652k data, 924k init)
[ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[ 0.000000] NR_IRQS:4352 nr_irqs:256 16
[ 0.000000] Console: colour VGA+ 80x25
[ 0.000000] console [tty1] enabled
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 4194304 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Detected 1799.999 MHz processor.
[ 0.008000] Calibrating delay loop (skipped) preset value.. 3599.99 BogoMIPS (lpj=7199996)
[ 0.008012] pid_max: default: 32768 minimum: 301
[ 0.009908] Security Framework initialized
[ 0.012034] AppArmor: AppArmor initialized
[ 0.013761] Yama: becoming mindful.
[ 0.016149] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
[ 0.018938] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.020103] Mount-cache hash table entries: 256
[ 0.024189] Initializing cgroup subsys cpuacct
[ 0.026036] Initializing cgroup subsys memory
[ 0.028027] Initializing cgroup subsys devices
[ 0.029856] Initializing cgroup subsys freezer
[ 0.031662] Initializing cgroup subsys blkio
[ 0.032021] Initializing cgroup subsys perf_event
[ 0.034066] mce: CPU supports 10 MCE banks
[ 0.036514] SMP alternatives: switching to UP code
[ 0.067550] Freeing SMP alternatives: 24k freed
[ 0.068013] ACPI: Core revision 20110623
[ 0.070122] ftrace: allocating 27027 entries in 106 pages
[ 0.076340] Enabling x2apic
[ 0.077224] Enabled x2apic
[ 0.078324] Switched APIC routing to physical x2apic.
[ 0.081445] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.084007] CPU0: Intel Xeon E312xx (Sandy Bridge) stepping 01
[ 0.088004] Performance Events: SandyBridge events, Intel PMU driver.
[ 0.088004] PEBS disabled due to CPU errata.
[ 0.088004] ... version: 2
[ 0.088008] ... bit width: 48
[ 0.089110] ... generic registers: 8
[ 0.090209] ... value mask: 0000ffffffffffff
[ 0.091558] ... max period: 000000007fffffff
[ 0.092008] ... fixed-purpose events: 3
[ 0.093109] ... event mask: 00000007000000ff
[ 0.097374] NMI watchdog enabled, takes one hw-pmu counter.
[ 0.098954] Brought up 1 CPUs
[ 0.099872] Total of 1 processors activated (3599.99 BogoMIPS).
[ 0.100460] devtmpfs: initialized
[ 0.102153] EVM: security.selinux
[ 0.104034] EVM: security.SMACK64
[ 0.105015] EVM: security.capability
[ 0.106990] print_constraints: dummy:
[ 0.108127] RTC time: 2:01:50, date: 10/25/13
[ 0.109376] NET: Registered protocol family 16
[ 0.110719] ACPI: bus type pci registered
[ 0.112166] PCI: Using configuration type 1 for base access
[ 0.114470] bio: create slab <bio-0> at 0
[ 0.116122] ACPI: Added _OSI(Module Device)
[ 0.117299] ACPI: Added _OSI(Processor Device)
[ 0.118488] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.119729] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.124139] ACPI: Interpreter enabled
[ 0.125195] ACPI: (supports S0 S3 S4 S5)
[ 0.126770] ACPI: Using IOAPIC for interrupt routing
[ 0.131027] ACPI: No dock devices found.
[ 0.132027] HEST: Table not found.
[ 0.133012] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.135410] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.136153] pci_root PNP0A03:00: host bridge window [io 0x0000-0x0cf7]
[ 0.137816] pci_root PNP0A03:00: host bridge window [io 0x0d00-0xffff]
[ 0.140028] pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff]
[ 0.142002] pci_root PNP0A03:00: host bridge window [mem 0x80000000-0xfebfffff]
[ 0.163047] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
[ 0.164041] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
[ 0.227143] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e)
[ 0.232237] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.234357] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.236134] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.238233] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.240585] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.242403] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.244035] vgaarb: loaded
[ 0.244904] vgaarb: bridge control possible 0000:00:02.0
[ 0.246366] i2c-core: driver [aat2870] using legacy suspend method
[ 0.248033] i2c-core: driver [aat2870] using legacy resume method
[ 0.249642] SCSI subsystem initialized
[ 0.250778] usbcore: registered new interface driver usbfs
[ 0.252049] usbcore: registered new interface driver hub
[ 0.253461] usbcore: registered new device driver usb
[ 0.254867] PCI: Using ACPI for IRQ routing
[ 0.256341] NetLabel: Initializing
[ 0.257351] NetLabel: domain hash size = 128
[ 0.258529] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.260045] NetLabel: unlabeled traffic allowed by default
[ 0.261552] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[ 0.263294] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.264872] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.276081] Switching to clocksource kvm-clock
[ 0.282780] AppArmor: AppArmor Filesystem Enabled
[ 0.284080] pnp: PnP ACPI init
[ 0.285013] ACPI: bus type pnp registered
[ 0.287002] pnp: PnP ACPI: found 8 devices
[ 0.288161] ACPI: ACPI bus type pnp unregistered
[ 0.297034] NET: Registered protocol family 2
[ 0.298261] IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.300133] TCP established hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.302120] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.303758] TCP: Hash tables configured (established 16384 bind 16384)
[ 0.305385] TCP reno registered
[ 0.306312] UDP hash table entries: 256 (order: 1, 8192 bytes)
[ 0.307744] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[ 0.309362] NET: Registered protocol family 1
[ 0.310533] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.311998] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.313493] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.315144] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.316650] pci 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 0.319274] pci 0000:00:01.2: PCI INT D disabled
[ 0.320875] Trying to unpack rootfs image as initramfs...
[ 0.324251] audit: initializing netlink socket (disabled)
[ 0.325646] type=2000 audit(1382666510.320:1): initialized
[ 0.388344] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.397689] VFS: Disk quotas dquot_6.5.2
[ 0.398852] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.400936] fuse init (API version 7.17)
[ 0.402108] msgmni has been set to 969
[ 0.420367] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[ 0.428481] io scheduler noop registered
[ 0.429707] io scheduler deadline registered (default)
[ 0.431208] io scheduler cfq registered
[ 0.432574] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.434173] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.436147] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 0.438347] ACPI: Power Button [PWRF]
[ 0.444983] ERST: Table is not found!
[ 0.446142] GHES: HEST is not enabled!
[ 0.452342] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[ 0.453984] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
[ 0.457444] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 0.468267] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 0.469904] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10
[ 0.476718] Freeing initrd memory: 3424k freed
[ 0.478965] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 0.503265] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.527584] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 0.552996] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.577254] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 0.579146] Linux agpgart interface v0.103
[ 0.582398] brd: module loaded
[ 0.584203] loop: module loaded
[ 0.611380] vda: vda1
[ 0.615719] scsi0 : ata_piix
[ 0.616858] scsi1 : ata_piix
[ 0.617907] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
[ 0.619750] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
[ 0.621986] Fixed MDIO Bus: probed
[ 0.623111] tun: Universal TUN/TAP device driver, 1.6
[ 0.624616] tun: (C) 1999-2004 Max Krasnyansky <maxk at qualcomm.com>
[ 0.652568] PPP generic driver version 2.4.2
[ 0.653997] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 0.655788] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 0.657553] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.659329] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 0.663454] uhci_hcd 0000:00:01.2: UHCI Host Controller
[ 0.665060] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[ 0.667415] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c080
[ 0.669280] hub 1-0:1.0: USB hub found
[ 0.670479] hub 1-0:1.0: 2 ports detected
[ 0.672063] usbcore: registered new interface driver libusual
[ 0.674003] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 0.677425] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 0.679082] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 0.680856] mousedev: PS/2 mouse device common for all mice
[ 0.683009] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 0.685864] rtc_cmos 00:01: RTC can wake from S4
[ 0.687825] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
[ 0.689877] rtc0: alarms up to one day, 114 bytes nvram, hpet irqs
[ 0.691849] device-mapper: uevent: version 1.0.3
[ 0.693459] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel at redhat.com
[ 0.696127] cpuidle: using governor ladder
[ 0.697530] cpuidle: using governor menu
[ 0.698827] EFI Variables Facility v0.08 2004-May-17
[ 0.700653] TCP cubic registered
[ 0.701911] NET: Registered protocol family 10
[ 0.704050] NET: Registered protocol family 17
[ 0.705481] Registering the dns_resolver key type
[ 0.707209] registered taskstats version 1
[ 0.711976] Magic number: 1:75:6
[ 0.713308] rtc_cmos 00:01: setting system clock to 2013-10-25 02:01:51 UTC (1382666511)
[ 0.715726] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[ 0.717490] EDD information not available.
[ 0.782439] Freeing unused kernel memory: 924k freed
[ 0.784270] Write protecting the kernel read-only data: 12288k
[ 0.792420] Freeing unused kernel memory: 1632k freed
[ 0.799098] Freeing unused kernel memory: 1200k freed
info: initramfs: up at 0.80
GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=2072385,end=2088450
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 1.13
Starting logging: OK
Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 1.26
no results found for mode=local. up 1.33. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 192.168.11.191...
Lease of 192.168.11.191 obtained, lease time 120
deleting routers
route: SIOCDELRT: No such process
adding dns 192.168.11.254
cirros-ds 'net' up at 1.38
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 1.39. request failed
failed 2/20: up 3.48. request failed
failed 3/20: up 5.49. request failed
failed 4/20: up 7.50. request failed
failed 5/20: up 9.52. request failed
failed 6/20: up 11.53. request failed
failed 7/20: up 13.54. request failed
failed 8/20: up 15.55. request failed
failed 9/20: up 17.56. request failed
failed 10/20: up 19.57. request failed
failed 11/20: up 21.58. request failed
failed 12/20: up 23.59. request failed
failed 13/20: up 25.60. request failed
failed 14/20: up 27.61. request failed
failed 15/20: up 29.62. request failed
failed 16/20: up 31.63. request failed
failed 17/20: up 33.64. request failed
failed 18/20: up 35.65. request failed
failed 19/20: up 37.66. request failed
failed 20/20: up 39.67. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 41.68. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,192.168.11.191,24,fe80::f816:3eff:fe15:e111
ip-route:default via 192.168.11.254 dev eth0
ip-route:192.168.11.0/24 dev eth0 src 192.168.11.191
=== datasource: None None ===
=== cirros: current=0.3.1 uptime=42.09 ===
____ ____ ____
/ __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/ \____/___/
http://cirros-cloud.net
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:
$ hostname
cirros
$ cat /etc/resolv.conf
search novalocal
nameserver 192.168.11.254
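One way to see where that nameserver value came from is to inspect the dnsmasq
process nova-network spawned; a quick sketch, where the generated-config
location is an assumption based on a stock nova-network install:
ps -ef | grep dnsmasq                        # check the --conf-file / --dhcp-optsfile arguments
cat /var/lib/nova/networks/nova-br100.conf   # the dnsmasq file nova-network generated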
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/2454e1ae/attachment-0001.html>
------------------------------
Message: 9
Date: Fri, 25 Oct 2013 01:58:57 -0200
From: Martinx - ????? <thiagocmartinsc at gmail.com>
To: "Speichert,Daniel" <djs428 at drexel.edu>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Directional network performance issues with
Neutron + OpenvSwitch
Message-ID:
<CAJSM8J0vfdDfTgNtvmL1Zg=uVtQvByfxN9=K+mBQsG3vFTvyWQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Hi Daniel,
I followed that page; my instances' MTU is lowered by the DHCP agent, but the
result is the same: poor network performance, both between instances and when
trying to reach the Internet.
Whether or not I use "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf"
plus "dhcp-option-force=26,1400" for my Neutron DHCP agent (i.e. MTU = 1500),
the result is almost the same.
I'll try VXLAN (or just VLANs) this weekend to see if I can get better
results...
Thanks!
Thiago
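For anyone reproducing this, the MTU clamp discussed below boils down to two
small changes; a minimal sketch, assuming stock Ubuntu paths and the option
names quoted above:
cat > /etc/neutron/dnsmasq-neutron.conf <<'EOF'
dhcp-option-force=26,1400
EOF
# point the DHCP agent at that file in /etc/neutron/dhcp_agent.ini:
#   dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
service neutron-dhcp-agent restart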
On 24 October 2013 17:38, Speichert,Daniel <djs428 at drexel.edu> wrote:
> We managed to bring the upload speed back to maximum on the instances
> through the use of this guide:
>
> http://docs.openstack.org/trunk/openstack-network/admin/content/openvswitch_plugin.html
>
> Basically, the MTU needs to be lowered for GRE tunnels. It can be done
> with DHCP as explained in the new trunk manual.
>
> Regards,
> Daniel
>
>
> From: annegentle at justwriteclick.com [mailto:
> annegentle at justwriteclick.com] On Behalf Of Anne Gentle
> Sent: Thursday, October 24, 2013 12:08 PM
> To: Martinx - ?????
> Cc: Speichert,Daniel; openstack at lists.openstack.org
> Subject: Re: [Openstack] Directional network performance issues with
> Neutron + OpenvSwitch
>
> On Thu, Oct 24, 2013 at 10:37 AM, Martinx - ????? <
> thiagocmartinsc at gmail.com> wrote:
>
> Precisely!
>
> The doc currently says to disable Namespaces when using GRE; I never did
> this before, look:
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/install-neutron.install-plugin.ovs.gre.html
>
> But on this very same doc, they say to enable it... Who knows?! =P
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/section_networking-routers-with-private-networks.html
>
> I stick with Namespaces enabled...
>
> Just a reminder, /trunk/ links are works in progress. Thanks for bringing
> the mismatch to our attention; we already have a doc bug filed:
>
> https://bugs.launchpad.net/openstack-manuals/+bug/1241056
>
> Review this patch: https://review.openstack.org/#/c/53380/
>
> Anne
>
> Let me ask you something: when you enable ovs_use_veth, do Metadata and
> DHCP still work?!
>
> Cheers!
> Thiago
>
> On 24 October 2013 12:22, Speichert,Daniel <djs428 at drexel.edu> wrote:
>
> Hello everyone,
>
> It seems we also ran into the same issue.
>
> We are running Ubuntu Saucy with OpenStack Havana from Ubuntu Cloud
> archives (precise-updates).
>
> The download speed to the VMs increased from 5 Mbps to maximum after
> enabling ovs_use_veth. Upload speed from the VMs is still terrible (max 1
> Mbps, usually 0.04 Mbps).
>
> Here is the iperf between the instance and the L3 agent (network node),
> inside the namespace.
>
> root at cloud:~# ip netns exec qrouter-a29e0200-d390-40d1-8cf7-7ac1cef5863a
> iperf -c 10.1.0.24 -r
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> ------------------------------------------------------------
> Client connecting to 10.1.0.24, TCP port 5001
> TCP window size: 585 KByte (default)
> ------------------------------------------------------------
> [ 7] local 10.1.0.1 port 37520 connected with 10.1.0.24 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 7] 0.0-10.0 sec 845 MBytes 708 Mbits/sec
> [ 6] local 10.1.0.1 port 5001 connected with 10.1.0.24 port 53006
> [ 6] 0.0-31.4 sec 256 KBytes 66.7 Kbits/sec
>
> We are using Neutron OpenVSwitch with GRE and namespaces.
>
> A side question: the documentation says to disable namespaces with GRE and
> enable them with VLANs. It was always working well for us on Grizzly with
> GRE and namespaces, and we could never get it to work without namespaces.
> Is there any specific reason why the documentation advises disabling it?
>
> Regards,
> Daniel
>
>
> From: Martinx - ????? [mailto:thiagocmartinsc at gmail.com]
> Sent: Thursday, October 24, 2013 3:58 AM
> To: Aaron Rosen
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] Directional network performance issues with
> Neutron + OpenvSwitch
>
> Hi Aaron,
>
> Thanks for answering! =)
>
> Let's work...
>
> ---
>
> TEST #1 - iperf between Network Node and its Uplink router (Data Center's
> gateway "Internet") - OVS br-ex / eth2
>
> # Tenant Namespace route table
>
> root at net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ip route
> default via 172.16.0.1 dev qg-50b615b7-c2
> 172.16.0.0/20 dev qg-50b615b7-c2 proto kernel scope link src 172.16.0.2
> 192.168.210.0/24 dev qr-a1376f61-05 proto kernel scope link src
> 192.168.210.1
>
> # there is an "iperf -s" running at 172.16.0.1 "Internet", testing it
>
> root at net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -c 172.16.0.1
> ------------------------------------------------------------
> Client connecting to 172.16.0.1, TCP port 5001
> TCP window size: 22.9 KByte (default)
> ------------------------------------------------------------
> [ 5] local 172.16.0.2 port 58342 connected with 172.16.0.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 5] 0.0-10.0 sec 668 MBytes 559 Mbits/sec
> ---
>
> ---
>
> TEST #2 - iperf from one instance to the Namespace of the L3 agent +
> uplink router
>
> # iperf server running within Tenant's Namespace router
>
> root at net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -s
>
> -
>
> # from instance-1
>
> ubuntu at instance-1:~$ ip route
> default via 192.168.210.1 dev eth0 metric 100
> 192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.2
>
> # instance-1 performing tests against net-node-1 Namespace above
>
> ubuntu at instance-1:~$ iperf -c 192.168.210.1
> ------------------------------------------------------------
> Client connecting to 192.168.210.1, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.210.2 port 43739 connected with 192.168.210.1 port
> 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 484 MBytes 406 Mbits/sec
>
> # still on instance-1, now against the "External IP" of its own Namespace /
> Router
>
> ubuntu at instance-1:~$ iperf -c 172.16.0.2
> ------------------------------------------------------------
> Client connecting to 172.16.0.2, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.210.2 port 34703 connected with 172.16.0.2 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 520 MBytes 436 Mbits/sec
>
> # still on instance-1, now against the Data Center UpLink Router
>
> ubuntu at instance-1:~$ iperf -c 172.16.0.1
> ------------------------------------------------------------
> Client connecting to 172.16.0.1, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.210.4 port 38401 connected with 172.16.0.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 324 MBytes 271 Mbits/sec
> ---
>
> This latest test shows only 271 Mbits/s! I think it should be at least
> 400~430 Mbits/s... Right?!
>
> ---
>
> TEST #3 - Two instances on the same hypervisor
>
> # iperf server
>
> ubuntu at instance-2:~$ ip route
> default via 192.168.210.1 dev eth0 metric 100
> 192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.4
>
> ubuntu at instance-2:~$ iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port
> 45800
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec
>
> # iperf client
>
> ubuntu at instance-1:~$ iperf -c 192.168.210.4
> ------------------------------------------------------------
> Client connecting to 192.168.210.4, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.210.2 port 45800 connected with 192.168.210.4 port
> 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec
> ---
>
> ---
>
> TEST #4 - Two instances on different hypervisors - over GRE
>
> root at instance-2:~# iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port
> 34640
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.0-10.0 sec 237 MBytes 198 Mbits/sec
>
> root at instance-1:~# iperf -c 192.168.210.4
> ------------------------------------------------------------
> Client connecting to 192.168.210.4, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.210.2 port 34640 connected with 192.168.210.4 port
> 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 237 MBytes 198 Mbits/sec
> ---
>
> I just realized how slow my intra-cloud (VM-to-VM) communication is...
> :-/
>
> ---
>
> TEST #5 - Two hypervisors - "GRE TUNNEL LAN" - OVS local_ip / remote_ip
>
> # Same path as "TEST #4", but testing the physical GRE path (where GRE
> traffic flows)
>
> root at hypervisor-2:~$ iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [ 4] local 10.20.2.57 port 5001 connected with 10.20.2.53 port 51694
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
>
> root at hypervisor-1:~# iperf -c 10.20.2.57
> ------------------------------------------------------------
> Client connecting to 10.20.2.57, TCP port 5001
> TCP window size: 22.9 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.20.2.53 port 51694 connected with 10.20.2.57 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
> ---
>
> About Test #5: I don't know why the GRE traffic (Test #4) doesn't reach
> 1 Gbit/sec (only ~200 Mbit/s?), since its physical path is much faster
> (gigabit LAN). Plus, Test #3 shows a pretty fast speed when traffic flows
> only within a hypervisor (3.96 Gbit/sec).
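One way to check whether the GRE-path drop is MTU-related is a DF-bit ping
from inside the tenant namespace; a sketch reusing the namespace from the
tests above (the payload sizes are assumptions: 1472 fills a 1500-byte MTU,
1372 fits under a 1400-byte clamp):
ip netns exec qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ping -c 3 -M do -s 1472 172.16.0.1
ip netns exec qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ping -c 3 -M do -s 1372 172.16.0.1
# if the 1472-byte probe fails while 1372 succeeds, something on the path
# (likely the GRE encapsulation) is clamping the effective MTU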
>
> Tomorrow, I'll do these tests with netperf.
>
> NOTE: I'm using Open vSwitch 1.11.0, compiled for Ubuntu 12.04.3 via
> "dpkg-buildpackage" and installed the "Debian / Ubuntu way". If I
> downgrade to 1.10.2 from the Havana Cloud Archive, same results... I can
> downgrade it, if you guys tell me to do so.
>
> BTW, I'll install another "Region", based on Havana on Ubuntu 13.10, with
> exactly the same configuration as my current Havana + Ubuntu 12.04.3, on
> top of the same hardware, to see if the problem still persists.
>
> Regards,
> Thiago
>
> On 23 October 2013 22:40, Aaron Rosen <arosen at nicira.com> wrote:
>
> On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ????? <
> thiagocmartinsc at gmail.com> wrote:
>
> James,
>
> I think I'm hitting this problem.
>
> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and an
> L3+DHCP Network Node.
>
> The connectivity from behind my Instances is very slow. It takes an
> eternity to finish "apt-get update".
>
> I'm curious if you can do the following tests to help pinpoint the
> bottleneck:
>
> Run iperf or netperf between:
> two instances on the same hypervisor - this will determine if it's a
> virtualization driver issue if the performance is bad.
> two instances on different hypervisors.
> one instance to the namespace of the l3 agent.
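Spelled out as commands, those three checks read roughly like this (a sketch;
the IPs and router UUID are placeholders, not values from this thread):
iperf -s                                   # on the target instance (or inside the l3 namespace)
iperf -c <instance-on-same-hypervisor>     # from the source instance
iperf -c <instance-on-other-hypervisor>    # same client, now crossing the GRE tunnel
ip netns exec qrouter-<uuid> iperf -s      # server for the instance -> l3 namespace test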
>
> If I run "apt-get update" from within the tenant's Namespace, it goes fine.
>
> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I am
> unable to start new Ubuntu Instances and log in to them... Look:
>
> --
> cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
> 2013-10-22 06:01:42,989 - util.py[WARNING]: '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]:
> url error [[Errno 113] No route to host]
> 2013-10-22 06:01:45,988 - util.py[WARNING]: '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]:
> url error [[Errno 113] No route to host]
> --
>
> Do you see anything interesting in the neutron-metadata-agent log? Or does
> it look like your instance doesn't have a route to the default gw?
>
> Is this problem still around?!
>
> Should I stay away from GRE tunnels with Havana + Ubuntu 12.04.3?
>
> Is it possible to re-enable Metadata when ovs_use_veth = true?
>
> Thanks!
> Thiago
>
> On 3 October 2013 06:27, James Page <james.page at ubuntu.com> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 02/10/13 22:49, James Page wrote:
> >> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
> >>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
> >>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
> >>> ms F=1500 0.598 ms 0.566 ms
> >>>
> >>> The PMTU from the l3 gateway to the instance looks OK to me.
> > I spent a bit more time debugging this; performance from within
> > the router netns on the L3 gateway node looks good in both
> > directions when accessing via the tenant network (10.5.0.2) over
> > the qr-XXXXX interface, but when accessing through the external
> > network from within the netns I see the same performance choke
> > upstream into the tenant network.
> >
> > Which would indicate that my problem lies somewhere around the
> > qg-XXXXX interface in the router netns - just trying to figure out
> > exactly what - maybe iptables is doing something wonky?
>
> OK - I found a fix but I'm not sure why this makes a difference;
> neither my l3-agent nor dhcp-agent configuration had 'ovs_use_veth =
> True'; I switched this on, cleared everything down, rebooted, and now
> I see symmetric good performance across all neutron routers.
>
> This would point to some sort of underlying bug when ovs_use_veth = False.
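For reference, the toggle James describes lives in both agent configs; a
minimal sketch, assuming stock Ubuntu file and service names:
# in /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini set:
#   ovs_use_veth = True
service neutron-l3-agent restart
service neutron-dhcp-agent restart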
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.page at ubuntu.com
> jamespage at debian.org
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.14 (GNU/Linux)
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
> fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
> CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
> qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
> Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
> 7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
> SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
> P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
> UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
> 0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
> DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
> jkJM4Y1BUV+2L5Rrf3sc
> =4lO4
>
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/ec2d0240/attachment-0001.html>
------------------------------
Message: 10
Date: Fri, 25 Oct 2013 10:54:24 +0530
From: Rajashree Thorat <rajashree.thorat16 at gmail.com>
To: openstack at lists.openstack.org
Subject: [Openstack] Test Mail
Message-ID:
<CACWqAd4wTprgURAnt9r3fwo_7jMt72ZYmXCR90DwBbj8EzykvA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Thanks & Regards
Rajshree Thorat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/6e3db987/attachment-0001.html>
------------------------------
Message: 11
Date: Fri, 25 Oct 2013 13:33:19 +0800
From: ??? <dongjh at nci.com.cn>
To: openstack <openstack at lists.openstack.org>
Subject: [Openstack] The same IP configured on different compute nodes
Message-ID: <2013102513331733840027 at nci.com.cn>
Content-Type: text/plain; charset="gb2312"
Hi,
I have one controller and two compute nodes running nova-network, and I have
created the following vmnet:
# nova network-create vmnet --fixed-range-v4=192.168.11.0/24
--bridge-interface=br100 --multi-host=T --gateway=192.168.11.253
--dns1=192.168.11.36 --dns2=192.168.11.24
When I start Cirros instances, the vmnet gateway IP address 192.168.11.253
gets configured on the bridge br100 on each compute node; the VMs can
nevertheless communicate with the external network. Will this be a
problem?
root at compute1:~# ifconfig br100
br100 Link encap:Ethernet HWaddr b8:ca:3a:ec:66:8c
inet addr:192.168.11.253 Bcast:192.168.11.255 Mask:255.255.255.0
inet6 addr: fe80::baca:3aff:feec:668c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9043 errors:0 dropped:0 overruns:0 frame:0
TX packets:20972 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:903366 (903.3 KB) TX bytes:2504357 (2.5 MB)
root at compute1:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br100
state UP qlen 1000
link/ether b8:ca:3a:ec:66:8c brd ff:ff:ff:ff:ff:ff
inet6 fe80::baca:3aff:feec:668c/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:66:8e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:66:90 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:66:92 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:0a:f7:24:1e:40 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.182/24 brd 10.10.10.255 scope global eth4
inet6 fe80::20a:f7ff:fe24:1e40/64 scope link
valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 00:0a:f7:24:1e:42 brd ff:ff:ff:ff:ff:ff
8: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether b8:ca:3a:ec:66:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.11.253/24 brd 192.168.11.255 scope global br100
inet 192.168.11.182/24 brd 192.168.11.255 scope global secondary br100
inet6 fe80::baca:3aff:feec:668c/64 scope link
valid_lft forever preferred_lft forever
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether d6:bd:23:e0:86:1f brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
11: vlan100 at br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
master br100 state UP
link/ether fa:16:3e:94:98:cc brd ff:ff:ff:ff:ff:ff
inet6 fe80::f816:3eff:fe94:98cc/64 scope link
valid_lft forever preferred_lft forever
14: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br100 state UNKNOWN qlen 500
link/ether fe:16:3e:e9:73:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fee9:732b/64 scope link
valid_lft forever preferred_lft forever
15: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br100 state UNKNOWN qlen 500
link/ether fe:16:3e:79:63:1a brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe79:631a/64 scope link
valid_lft forever preferred_lft forever
root at compute2:~# ifconfig br100
br100 Link encap:Ethernet HWaddr b8:ca:3a:ec:7b:3a
inet addr:192.168.11.253 Bcast:192.168.11.255 Mask:255.255.255.0
inet6 addr: fe80::baca:3aff:feec:7b3a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10420 errors:0 dropped:0 overruns:0 frame:0
TX packets:21246 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:997249 (997.2 KB) TX bytes:2545513 (2.5 MB)
root at compute2:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br100
state UP qlen 1000
link/ether b8:ca:3a:ec:7b:3a brd ff:ff:ff:ff:ff:ff
inet6 fe80::baca:3aff:feec:7b3a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:7b:3c brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:7b:3e brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether b8:ca:3a:ec:7b:40 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:0a:f7:24:25:a0 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.183/24 brd 10.10.10.255 scope global eth4
inet6 fe80::20a:f7ff:fe24:25a0/64 scope link
valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 00:0a:f7:24:25:a2 brd ff:ff:ff:ff:ff:ff
8: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether b8:ca:3a:ec:7b:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.11.253/24 brd 192.168.11.255 scope global br100
inet 192.168.11.183/24 brd 192.168.11.255 scope global secondary br100
inet6 fe80::baca:3aff:feec:7b3a/64 scope link
valid_lft forever preferred_lft forever
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether 22:8b:bd:c8:52:90 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
11: vlan100 at br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
master br100 state UP
link/ether fa:16:3e:6c:5b:1f brd ff:ff:ff:ff:ff:ff
inet6 fe80::f816:3eff:fe6c:5b1f/64 scope link
valid_lft forever preferred_lft forever
13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br100 state UNKNOWN qlen 500
link/ether fe:16:3e:10:c8:29 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe10:c829/64 scope link
valid_lft forever preferred_lft forever
14: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br100 state UNKNOWN qlen 500
link/ether fe:16:3e:7d:78:56 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe7d:7856/64 scope link
valid_lft forever preferred_lft forever
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/ffd0a2a6/attachment-0001.html>
------------------------------
Message: 12
Date: Fri, 25 Oct 2013 00:11:04 -0700
From: Gary Kotton <gkotton at vmware.com>
To: Cristian Falcas <cristi.falcas at gmail.com>, openstack learner
<openstackleaner at gmail.com>
Cc: openstack <openstack at lists.openstack.org>
Subject: Re: [Openstack] Snapshot failure with VMwareVCDriver
Message-ID: <CE8FF409.1A762%gkotton at vmware.com>
Content-Type: text/plain; charset="us-ascii"
Hi,
Are you using the latest trunk version? Do you see any errors in the n-cpu
screen?
Thanks
Gary
On 10/24/13 10:55 PM, "Cristian Falcas" <cristi.falcas at gmail.com> wrote:
>Hi Xin,
>
>I was wondering if you could share your nova config? I'm trying to
>connect OpenStack with vSphere, but I keep getting errors regarding
>networking. I have no idea yet how to implement networking from
>OpenStack to VMware, and I thought that maybe a working config could
>help me.
>
>Thank you,
>Cristian Falcas
>
>On Mon, Sep 23, 2013 at 11:47 PM, openstack learner
><openstackleaner at gmail.com> wrote:
>> Hi all,
>>
>> After cloning the newest devstack repository on Sep 23, 2013 and
>> reinstalling devstack, taking a snapshot still failed.
>>
>> This time, from the vCenter task history I could see that both the
>> "Create virtual machine snapshot" and "Copy virtual disk" operations
>> completed successfully. There was a "Copy virtual disk" failure before,
>> and I think that issue is fixed in the new code.
>>
>> From Horizon, the newly created snapshot image's status was first
>> "queued" and then "deleted", and the snapshot image is not in the list
>> when I do a nova image-list or glance image-list, or look at the Horizon
>> images & snapshots list.
>>
>> From the "g-reg" screen log, it seems the snapshot image was created and
>> then deleted before glance uploaded the image.
>>
>> The output log is shown below:
>>
>> 2013-09-23 12:49:46.634 13386 INFO glance.registry.api.v1.images
>> [61f1aefc-e56b-481e-ac5d-23e5c8a1ab6c 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:50:09.126 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:50:09.126 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:50:09.126 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>982
>> 2013-09-23 12:50:09.127 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Matched GET
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'show', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'show', 'controller': <glance.common.wsgi.Resource object at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:50:09.138 13386 INFO glance.registry.api.v1.images
>> [fa8c8425-0637-45cc-a80f-c188a787ad41 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:50:34.130 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:50:34.131 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:50:34.131 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>982
>> 2013-09-23 12:50:34.131 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:50:34.132 13386 DEBUG routes.middleware [-] Matched GET
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:50:34.132 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'show', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:50:34.132 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'show', 'controller': <glance.common.wsgi.Resource object at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:50:34.139 13386 INFO glance.registry.api.v1.images
>> [5d0b9eec-b064-40b2-8db7-aeeb1f88c97e 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:50:54.527 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:50:54.527 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:50:54.528 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>982
>> 2013-09-23 12:50:54.528 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:50:54.528 13386 DEBUG routes.middleware [-] Matched GET
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:50:54.529 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'show', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:50:54.529 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'show', 'controller': <glance.common.wsgi.Resource object at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:50:54.535 13386 INFO glance.registry.api.v1.images
>> [3681cb8b-f9b2-480e-b0b1-a05a08649701 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:50:54.538 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:50:54.538 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:50:54.538 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>982
>> 2013-09-23 12:50:54.539 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:50:54.539 13386 DEBUG routes.middleware [-] Matched PUT
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:50:54.539 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'update', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:50:54.540 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'update', 'controller': <glance.common.wsgi.Resource object
>>at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:50:54.540 13386 DEBUG glance.registry.api.v1.images
>> [a5473e1c-63d4-447e-9ebf-363f4176bebc 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Updating image
>> 66e47135-8576-49af-a474-47a11de0c46d with metadata: {u'status':
>>u'deleted'}
>> update /opt/stack/glance/glance/registry/api/v1/images.py:436
>> 2013-09-23 12:50:54.555 13386 INFO glance.registry.api.v1.images
>> [a5473e1c-63d4-447e-9ebf-363f4176bebc 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Updating metadata for image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:50:54.558 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:50:54.558 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:50:54.558 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>982
>> 2013-09-23 12:50:54.559 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:50:54.559 13386 DEBUG routes.middleware [-] Matched DELETE
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:50:54.559 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'delete', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:50:54.559 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'delete', 'controller': <glance.common.wsgi.Resource object
>>at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:50:54.591 13386 INFO glance.registry.api.v1.images
>> [681e3d40-ecaa-42be-beb8-fc972a2997a5 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Successfully deleted image
>> 66e47135-8576-49af-a474-47a11de0c46d
>> 2013-09-23 12:51:01.650 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:51:01.650 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:51:01.651 13386 INFO
>>requests.packages.urllib3.connectionpool
>> [-] Starting new HTTP connection (1): 172.20.239.92
>> 2013-09-23 12:51:01.664 13386 DEBUG
>>requests.packages.urllib3.connectionpool
>> [-] "GET /v2.0/tokens/a06afb4e1371592a52ee6cb53b0e2bae HTTP/1.1" 200
>>5371
>> _make_request
>>
>>/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connecti
>>onpool.py:296
>> 2013-09-23 12:51:01.664 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Storing a06afb4e1371592a52ee6cb53b0e2bae token in memcache _cache_put
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>1042
>> 2013-09-23 12:51:01.665 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:51:01.665 13386 DEBUG routes.middleware [-] Matched GET
>> /images/66e47135-8576-49af-a474-47a11de0c46d __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:51:01.665 13386 DEBUG routes.middleware [-] Route path:
>> '/images/{id}', defaults: {'action': u'show', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:51:01.665 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'show', 'controller': <glance.common.wsgi.Resource object at
>> 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:51:01.671 13386 DEBUG glance.db.sqlalchemy.api
>> [94099a83-19e1-40c1-b592-dd61013caf7a 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] No image found with ID
>> 66e47135-8576-49af-a474-47a11de0c46d _image_get
>> /opt/stack/glance/glance/db/sqlalchemy/api.py:334
>> 2013-09-23 12:51:01.671 13386 INFO glance.registry.api.v1.images
>> [94099a83-19e1-40c1-b592-dd61013caf7a 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Image
>>66e47135-8576-49af-a474-47a11de0c46d
>> not found
>> 2013-09-23 12:57:51.769 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Authenticating user token __call__
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>532
>> 2013-09-23 12:57:51.769 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Removing headers from request environment:
>>
>>X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X
>>-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Dom
>>ain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-
>>Tenant-Name,X-Tenant,X-Role
>> _remove_auth_headers
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>591
>> 2013-09-23 12:57:51.770 13386 INFO
>>requests.packages.urllib3.connectionpool
>> [-] Starting new HTTP connection (1): 172.20.239.92
>> 2013-09-23 12:57:51.797 13386 DEBUG
>>requests.packages.urllib3.connectionpool
>> [-] "GET /v2.0/tokens/revoked HTTP/1.1" 200 794 _make_request
>>
>>/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connecti
>>onpool.py:296
>> 2013-09-23 12:57:51.809 13386 DEBUG
>>keystoneclient.middleware.auth_token [-]
>> Storing e8ba2019300908551e0c6ccb766af4e6 token in memcache _cache_put
>>
>>/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:
>>1042
>> 2013-09-23 12:57:51.811 13386 DEBUG glance.api.policy [-] Loaded policy
>> rules: {u'context_is_admin': 'role:admin', u'default': '@',
>> u'manage_image_cache': 'role:admin'} load_rules
>> /opt/stack/glance/glance/api/policy.py:75
>> 2013-09-23 12:57:51.811 13386 DEBUG routes.middleware [-] Matched GET
>> /images/detail __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:100
>> 2013-09-23 12:57:51.811 13386 DEBUG routes.middleware [-] Route path:
>> '/images/detail', defaults: {'action': u'detail', 'controller':
>> <glance.common.wsgi.Resource object at 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:102
>> 2013-09-23 12:57:51.811 13386 DEBUG routes.middleware [-] Match dict:
>> {'action': u'detail', 'controller': <glance.common.wsgi.Resource object
>>at
>> 0x279ba50>} __call__
>> /usr/lib/python2.7/dist-packages/routes/middleware.py:103
>> 2013-09-23 12:57:51.833 13386 INFO glance.registry.api.v1.images
>> [8fe70cc1-bee3-4629-91fa-0f6aa0acbcc4 1e1e314becc94d2ebe8246f0a36ca99a
>> 09ee20f776914ad7983bb2ace867623a] Returning detailed image list
>>
>>
>>
>> Any idea what caused the problem? Does it mean that the bug has not
>> been totally fixed, or is something wrong with my devstack setup or
>> OpenStack services?
>>
>> Thanks
>> xin
>>
>>
>>
>> On Sun, Sep 22, 2013 at 11:05 PM, Chinmaya Bharadwaj A
>> <acbharadwaj at hotmail.com> wrote:
>>>
>>> There is a bug filed, but it is supposed to be fixed by now.
>>> https://bugs.launchpad.net/nova/+bug/1184807
>>> just check you have these changes
>>> https://review.openstack.org/#/c/40298/18
>>>
>>> regards
>>> Chinmay
>>>
>>> ________________________________
>>> Date: Sun, 22 Sep 2013 19:02:02 -0700
>>> From: openstackleaner at gmail.com
>>> To: openstack at lists.openstack.org
>>>
>>> Subject: [Openstack] Snapshot failure with VMwareVCDriver
>>>
>>>
>>>
>>> I am unable to get snapshots working in my devstack setup with the
>>> VCDriver.
>>>
>>> When I tried to snapshot a VM instance, in the glance image-list I could
>>> at first see that a new image was created with status "saving"; a few
>>> seconds later the newly created image entry disappeared from the image
>>> list, and when I checked the vCenter task history I found that the
>>> "Create virtual machine snapshot" operation completed but there was an
>>> error in the "Copy virtual disk" operation: "The requested operation is
>>> not implemented by the server."
>>>
>>> Has anyone seen the same problem and knows how to solve it? Any idea
>>> what caused it? Is this a bug in the vmware api with devstack?
>>>
>>> Thanks
>>>
>>> xin
>>>
>>> _______________________________________________ Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to :
>>> openstack at lists.openstack.org Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>>
>> _______________________________________________
>> Mailing list:
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe :
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
------------------------------
Message: 13
Date: Fri, 25 Oct 2013 15:33:39 +0800
From: ??? <dongjh at nci.com.cn>
To: openstack <openstack at lists.openstack.org>
Subject: [Openstack] The dashboard crash after attaching a volume to a
VM
Message-ID: <201310251533381046154 at nci.com.cn>
Content-Type: text/plain; charset="gb2312"
Hello,
I just installed and configured the cinder services on the controller node.
After attaching a volume to an instance, I clicked the link of the instance
name and the dashboard crashed with the error message below; however, I can
click other instances without issue. Has anybody encountered this?
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn't
help, contact your local administrator.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/fd084383/attachment-0001.html>
------------------------------
Message: 14
Date: Fri, 25 Oct 2013 09:14:50 +0100
From: "Daniel P. Berrange" <berrange at redhat.com>
To: "Craig E. Ward" <cward at isi.edu>
Cc: OpenStack Mailing List <openstack at lists.openstack.org>
Subject: Re: [Openstack] libvirtd and Folsom Quantum/Neutron and
iptables
Message-ID: <20131025081450.GA27738 at redhat.com>
Content-Type: text/plain; charset=utf-8
On Thu, Oct 24, 2013 at 04:02:04PM -0700, Craig E. Ward wrote:
> I have a Folsom installation that has re-tasked some of the host
> hardware. What was a nova compute node is now a Quantum (agent)
> node. In the conversion, the libvirtd service was not removed. It
> looks like it could be causing some issues with the iptables rules.
>
> Will libvirtd insert rules that may conflict with the rules inserted
> by Quantum? Or do I need to look elsewhere for conflicts?
It depends on the installation - if the libvirt default network is
present (e.g. a virbr0 bridge device), then there will be a few iptables
rules present. I don't know if those will conflict with openstack
or not. A 'virsh net-destroy default' followed by a 'virsh net-autostart
--disable default' will remove the network libvirt has (if present).
> Is there any reason to leave libvirtd running on a Folsom Quantum node?
No, it is only required where 'nova-compute' is running.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
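Spelled out, the cleanup Daniel suggests is just the two virsh calls he names
('default' is libvirt's stock network name):
virsh net-destroy default               # tears down virbr0 and its iptables rules now
virsh net-autostart --disable default   # keeps it from returning at the next reboot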
------------------------------
Message: 15
Date: Fri, 25 Oct 2013 16:14:32 +0800
From: ??? <dongjh at nci.com.cn>
To: openstack <openstack at lists.openstack.org>
Subject: [Openstack] Re: The dashboard crash after attaching a volume
to a VM
Message-ID: <201310251607459107964 at nci.com.cn>
Content-Type: text/plain; charset="gb2312"
Adding a comment.
This is the apache error log.
root at controller:/var/log/apache2# nova volume-attach test1
81554eb2-69c4-45c3-9886-21369a5238d7 /dev/vdb
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| serverId | f2905850-d84f-4792-a868-b5018d379c9d |
| id | 81554eb2-69c4-45c3-9886-21369a5238d7 |
| volumeId | 81554eb2-69c4-45c3-9886-21369a5238d7 |
+----------+--------------------------------------+
root at controller:/var/log/apache2# tail -f error.log
[Fri Oct 25 08:02:37 2013] [error] Internal Server Error:
/horizon/project/instances/f2905850-d84f-4792-a868-b5018d379c9d/
[Fri Oct 25 08:02:37 2013] [error] Traceback (most recent call last):
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 115,
in get_response
[Fri Oct 25 08:02:37 2013] [error] response = callback(request,
*callback_args, **callback_kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
[Fri Oct 25 08:02:37 2013] [error] return view_func(request, *args,
**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 54, in dec
[Fri Oct 25 08:02:37 2013] [error] return view_func(request, *args,
**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
[Fri Oct 25 08:02:37 2013] [error] return view_func(request, *args,
**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 68, in
view
[Fri Oct 25 08:02:37 2013] [error] return self.dispatch(request, *args,
**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 86, in
dispatch
[Fri Oct 25 08:02:37 2013] [error] return handler(request, *args,
**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 60, in get
[Fri Oct 25 08:02:37 2013] [error] context =
self.get_context_data(**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_das
hboard/dashboards/project/instances/views.py", line 212, in get_context_data
[Fri Oct 25 08:02:37 2013] [error] context = super(DetailView,
self).get_context_data(**kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 44, in
get_context_data
[Fri Oct 25 08:02:37 2013] [error] exceptions.handle(self.request)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 39, in
get_context_data
[Fri Oct 25 08:02:37 2013] [error] tab_group =
self.get_tabs(self.request, **kwargs)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_das
hboard/dashboards/project/instances/views.py", line 239, in get_tabs
[Fri Oct 25 08:02:37 2013] [error] instance = self.get_data()
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_das
hboard/dashboards/project/instances/views.py", line 234, in get_data
[Fri Oct 25 08:02:37 2013] [error] redirect=redirect)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_das
hboard/dashboards/project/instances/views.py", line 222, in get_data
[Fri Oct 25 08:02:37 2013] [error] instance_id)
[Fri Oct 25 08:02:37 2013] [error] File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_das
hboard/api/nova.py", line 637, in instance_volumes_list
[Fri Oct 25 08:02:37 2013] [error] volume_data =
cinderclient(request).volumes.get(volume.id)
[Fri Oct 25 08:02:37 2013] [error] AttributeError: 'NoneType' object has no
attribute 'volumes'
From: ???
Sent: 2013-10-25 15:33
To: openstack
Subject: The dashboard crash after attaching a volume to a VM
Hello,
I just installed and configured the cinder services on the controller node.
After attaching a volume to an instance, I clicked the link of the instance
name and the dashboard crashed with the error message below; however, I can
click other instances without issue. Has anybody encountered this?
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn't
help, contact your local administrator.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://lists.openstack.org/pipermail/openstack/attachments/20131025/db9d6d0e/attachment-0001.html>
------------------------------
Message: 16
Date: Fri, 25 Oct 2013 08:39:51 +0000
From: Lingala Srikanth Kumar-B37208 <B37208 at freescale.com>
To: dongjh at nci.com.cn, openstack <openstack at lists.openstack.org>
Subject: Re: [Openstack] Re: The dashboard crash after attaching a
volume to a VM
Message-ID:
<6A9B6FB9B9BD9641A7471ECB694DD6759B3DFD at 039-SN1MPN1-005.039d.mgd.msft.net>
Content-Type: text/plain; charset="gb2312"
Hi,
Please check your /etc/nova/nova.conf. After installing cinder, you need
to configure some cinder settings in nova.conf.
Please follow the link below to install and configure cinder on the
controller node:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_SingleNode/OpenStack_Grizzly_Install_Guide.rst
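For reference, the cinder-related settings that guide adds to nova.conf look
roughly like this (a sketch of a Grizzly-era setup; adjust to your deployment):

    # /etc/nova/nova.conf -- hand volume operations over to cinder
    volume_api_class=nova.volume.cinder.API
    # drop osapi_volume so nova's built-in volume API no longer answers
    enabled_apis=ec2,osapi_compute,metadata

If cinder is also missing from the keystone service catalog, Horizon can end
up without a cinder client at all, which would match the 'NoneType' error in
the traceback above.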
Regards,
Srikanth.
From: dongjh at nci.com.cn
Sent: Friday, October 25, 2013 1:45 PM
To: openstack
Subject: [Openstack] Re: The dashboard crash after attaching a volume to a
VM
Adding a comment: this is the Apache error log.
root at controller:/var/log/apache2# nova volume-attach test1
81554eb2-69c4-45c3-9886-21369a5238d7 /dev/vdb
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| serverId | f2905850-d84f-4792-a868-b5018d379c9d |
| id | 81554eb2-69c4-45c3-9886-21369a5238d7 |
| volumeId | 81554eb2-69c4-45c3-9886-21369a5238d7 |
+----------+--------------------------------------+
root at controller:/var/log/apache2# tail -f error.log
[Fri Oct 25 08:02:37 2013] [error] Internal Server Error: /horizon/project/instances/f2905850-d84f-4792-a868-b5018d379c9d/
[Fri Oct 25 08:02:37 2013] [error] Traceback (most recent call last):
[... identical to the traceback quoted in full above ...]
[Fri Oct 25 08:02:37 2013] [error] AttributeError: 'NoneType' object has no attribute 'volumes'
From: dongjh at nci.com.cn<mailto:dongjh at nci.com.cn>
Sent: 2013-10-25 15:33
To: openstack<mailto:openstack at lists.openstack.org>
Subject: The dashboard crash after attaching a volume to a VM
Hello,
I just installed and configured the cinder services on the controller node.
After attaching a volume to an instance, I clicked the link of the instance
name and the dashboard crashed with the error messages below; however, I can
click other instances without issue. Has anybody encountered this?
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn't
help, contact your local administrator.
------------------------------
Message: 17
Date: Fri, 25 Oct 2013 10:56:36 +0200
From: Alexander Stellwag <openstack at stellwag.net>
To: Joshua Harlow <harlowja at yahoo-inc.com>, Tim Bell
<Tim.Bell at cern.ch>, "openstack at lists.openstack.org"
<openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID: <526A3244.7080308 at stellwag.net>
Content-Type: text/plain; charset="iso-8859-1"
Joshua,
On 24.10.2013 21:48, Joshua Harlow wrote:
> Whatever happened to doing in-place upgrades?
>
> Has that been problematic for u, are people just not doing it? If
> they are avoiding it, why?
If it were possible for us, I would be the first to do it.
Unfortunately it's not that easy, because we're also changing the
complete infrastructure (most notably going from nova-network to
neutron/ovs).
> Shouldn't just going from folsom->havana work, if not, why not; it
> worries me that this isn't priority #0 if it doesn't work.
It works as long as you stick with your old network layout. I do
sincerely hope that this is the only migration we must do "the hard way".
Cheers,
Alex
> On 10/24/13 10:51 AM, "Tim Bell" <Tim.Bell at cern.ch> wrote:
>
>>
>> Have you tried
>>
>> - snapshot
>> - glance download of the snapshot
>> - glance upload of the snapshot to the new instance
>> - boot from snapshot
>>
>> The process we use at CERN is documented at
>> http://information-technology.web.cern.ch/book/cern-cloud-infrastructure-user-guide/images/migrating-using-images
>>
>> This could be a good technique to document in the standard CERN
>> openstack user guide.
>>
>> BTW, this does increase the storage in your glance server, since
>> you'll lose any commonality between multiple images. So make sure
>> you've got lots of space on Glance.
>>
>> Tim
>>
>>> -----Original Message-----
>>> From: Alexander Stellwag [mailto:openstack at stellwag.net]
>>> Sent: 24 October 2013 16:53
>>> To: openstack at lists.openstack.org
>>> Subject: [Openstack] Migrate instances/tenants between clouds
>>>
>>> Hi stackers,
>>>
>>> we're looking for a tool / script / blueprint to migrate
>>> instances or even complete tenants between multiple installations
>>> of OpenStack (possibly running different versions).
>>>
>>> I searched around the net but didn't find anything appropriate.
>>> Is any of you aware of such a tool?
>>>
>>> The current use-case is a migration from a folsom/nova-network
>>> based installation into our new havana/neutron based cloud. It is
>>> not necessary to migrate instances and volumes online but it
>>> should work at least semi-automatically to make it usable in
>>> large deployments.
>>>
>>> Any hints would be greatly appreciated.
>>>
>>> Cheers, Alex
>>> --
>>> Alexander Stellwag
>>> Deutsche Telekom AG Products & Innovation
>>> Infrastructure Design
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to: openstack at lists.openstack.org
>> Unsubscribe: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
--
Alexander Stellwag
Deutsche Telekom AG Products & Innovation
Infrastructure Design
------------------------------
Message: 18
Date: Fri, 25 Oct 2013 11:03:59 +0200
From: Alexander Stellwag <openstack at stellwag.net>
To: Tim Bell <Tim.Bell at cern.ch>, "openstack at lists.openstack.org"
<openstack at lists.openstack.org>
Subject: Re: [Openstack] Migrate instances/tenants between clouds
Message-ID: <526A33FF.4020403 at stellwag.net>
Content-Type: text/plain; charset="iso-8859-1"
Tim,
On 24.10.2013 19:51, Tim Bell wrote:
> Have you tried
>
> - snapshot
> - glance download of the snapshot
> - glance upload of the snapshot to new instance
> - boot from snapshot
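Spelled out as client commands, those steps map to roughly the following
(a sketch; the image names, IDs, and flavor are placeholders):

    # in the old cloud
    nova image-create <instance-id> migrate-snap
    glance image-download --file migrate-snap.img <image-id>
    # point the clients at the new cloud, then
    glance image-create --name migrate-snap --disk-format qcow2 \
        --container-format bare --file migrate-snap.img
    nova boot --image migrate-snap --flavor <flavor> migrated-vm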
thank you very much. This is working fine but unfortunately not enough
for us. This way, we lose all the metadata of both the instance and the
template:
- security groups
- SSH keys
- attached volumes
- associated fixed and floating IP addresses.
The idea is to have the VMs in the new environment appear exactly as
they did in the old one, so simply moving a snapshot between the clouds
does not work :(
We'll probably end up writing a migration tool to achieve our goals, and
I'm pretty confident we'll be able to make it publicly available on GitHub.
Cheers,
Alex
--
Alexander Stellwag
Deutsche Telekom AG Products & Innovation
Infrastructure Design
------------------------------
Message: 19
Date: Fri, 25 Oct 2013 10:15:47 +0100 (BST)
From: Darragh O'Reilly <dara2002-openstack at yahoo.com>
To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] Directional network performance issues with
Neutron + OpenvSwitch
Message-ID:
<1382692547.19002.YahooMailNeo at web172403.mail.ir2.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Hi Thiago,
you have configured DHCP to push out an MTU of 1400. Can you confirm that the
1400 MTU is actually getting out to the instances by running 'ip link' on
them?
There is an open problem where the veth pair used to connect the OVS and
Linux bridges causes a performance drop on some kernels -
https://bugs.launchpad.net/nova-project/+bug/1223267 . If you are using the
LibvirtHybridOVSBridgeDriver VIF driver, can you try changing to
LibvirtOpenVswitchDriver and repeating the iperf test between instances on
different compute nodes?
What NICs (maker+model) are you using? You could try disabling any off-load
functionality: 'ethtool -k <iface-used-for-gre>' lists the settings, and
'ethtool -K <iface> <feature> off' turns one off.
What kernel are you using: 'uname -a'?
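Concretely, those checks look something like this (a sketch; the interface
names are placeholders for whatever carries your GRE traffic):

    ip link show eth0                 # on an instance: MTU should read 1400
    ethtool -k eth2                   # on the compute node: list offloads
    ethtool -K eth2 gro off tso off   # e.g. disable GRO/TSO, then re-run iperf
    uname -a                          # kernel version, for the bug report

    # Grizzly-era nova.conf option for the VIF driver change suggested above:
    # libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver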
Re, Darragh.
> Hi Daniel,
>
> I followed that page, my Instances MTU is lowered by DHCP Agent but, same
> result: poor network performance (internal between Instances and when
> trying to reach the Internet).
>
> No matter if I use "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf"
> plus "dhcp-option-force=26,1400" for my Neutron DHCP agent, or not (i.e.
> MTU = 1500), the result is almost the same.
>
> I'll try VXLAN (or just VLANs) this weekend to see if I can get better
> results...
>
> Thanks!
> Thiago
------------------------------
Message: 20
Date: Fri, 25 Oct 2013 16:49:14 +0530
From: ankush grover <ankushcentos at gmail.com>
To: openstack <openstack at lists.openstack.org>
Subject: [Openstack] Doc or Link for enabling Active Directory
authentication for login information in Havana
Message-ID:
<CACe638RjHT46__ht8gMnUm+WCaNfOwHRjfc8yRZKCCK01+zbDg at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Hi All,
I have set up the OpenStack Havana release through the Red Hat RDO utility.
OpenStack is working fine, but I am not able to configure Active Directory
authentication for login only. The Havana release notes mention this
feature, and I would like to use it, but I cannot find any doc or link
describing how. The release notes say:
you can now tie login information to your corporate LDAP (or Active
Directory) server, while having role and group management handled on the
OpenStack SQL server.
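That split corresponds, roughly, to keystone.conf settings like these (a
sketch; the LDAP values are placeholders for your AD layout):

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [assignment]
    # roles and grants stay in SQL while users come from AD
    driver = keystone.assignment.backends.sql.Assignment

    [ldap]
    url = ldap://ad.example.com
    user = CN=svc-openstack,OU=Services,DC=example,DC=com
    password = secret
    suffix = DC=example,DC=com
    user_tree_dn = OU=Users,DC=example,DC=com
    user_objectclass = person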
Can someone share the link or doc for doing the same? Do let me know if you
need any further information.
Thanks & Regards
Ankush
------------------------------
_______________________________________________
Openstack mailing list
openstack at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
End of Openstack Digest, Vol 4, Issue 29
****************************************