[Openstack-operators] Using metadata with quantum

Juan José Pavlik Salles jjpavlik at gmail.com
Thu Aug 22 13:05:56 UTC 2013


Great link Lorin, thanks! I'll try Darragh's ideas today.
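
If I've read the post correctly, the core of Darragh's approach (assuming the
Grizzly-era option names) is to let quantum-dhcp-agent serve metadata from its
own namespace instead of relying on the l3-agent, roughly:

# dhcp_agent.ini
[DEFAULT]
# Spawn a metadata proxy inside each qdhcp- namespace; for subnets without a
# router the DHCP agent then also hands out a host route to 169.254.169.254.
enable_isolated_metadata = True

followed by a restart of quantum-dhcp-agent.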


2013/8/21 Lorin Hochstein <lorin at nimbisservices.com>

>  Juan:
>
>  Check out this blog post from Darragh O'Reilly about how to configure
> access to the metadata service when not using the l3-agent:
>
>
> http://techbackground.blogspot.com/2013/06/metadata-via-dhcp-namespace.html?m=1
>
>
>
>  Lorin
>> Sent from Mailbox <https://www.dropbox.com/mailbox> for iPhone
>
> On Wed, Aug 21, 2013 at 4:56 PM, Juan José Pavlik Salles <
> jjpavlik at gmail.com> wrote:
>
>> Hi guys, I'm trying to get metadata working with quantum. I don't have the
>> L3-agent and I'm using provider networks with OVS and VLANs. So far
>> quantum-dhcp is working great and my VMs get their IPs with no problems,
>> but during boot they get stuck trying to reach the metadata service.
>>
>> Is it possible to access the metadata service without using
>> quantum-l3-agent?
>>
>> My network node runs quantum-server, quantum-dhcp-agent,
>> quantum-openvswitch-agent and quantum-metadata-service. I've configured
>> the metadata agent like this:
>>
>> [DEFAULT]
>> # Show debugging output in log (sets DEBUG log level output)
>> debug = True
>> verbose = True
>>
>> # The Quantum user information for accessing the Quantum API.
>> auth_url = http://172.19.136.1:35357/v2.0
>> auth_region = RegionOne
>> admin_tenant_name = service
>> admin_user = quantum
>> admin_password = yedAA567
>>
>> # IP address used by Nova metadata server
>> #### change this!!! to a load-balanced IP, probably 172.19.136.1
>> nova_metadata_ip = 172.19.136.12
>>
>> # TCP Port used by Nova metadata server
>> nova_metadata_port = 8775
>>
>> # When proxying metadata requests, Quantum signs the Instance-ID header with a
>> # shared secret to prevent spoofing. You may select any string for a secret,
>> # but it must match here and in the configuration used by the Nova Metadata
>> # Server. NOTE: Nova uses a different key: quantum_metadata_proxy_shared_secret
>> # metadata_proxy_shared_secret =
>> metadata_proxy_shared_secret = XXXX
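>>
>> For reference, I believe the matching settings on the nova side should look
>> roughly like this in nova.conf (assuming the Grizzly option names; the
>> secret must be identical to metadata_proxy_shared_secret above):
>>
>> [DEFAULT]
>> # Let nova-api accept metadata requests proxied by Quantum
>> service_quantum_metadata_proxy = True
>> # Must match metadata_proxy_shared_secret in metadata_agent.ini
>> quantum_metadata_proxy_shared_secret = XXXX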
>>
>> What else should I do? I still get
>>
>> 2013-08-21 20:29:36,057 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [22/120s]: http error [504]
>> 2013-08-21 20:29:45,117 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [31/120s]: http error [504]
>> 2013-08-21 20:29:55,181 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [41/120s]: http error [504]
>> 2013-08-21 20:30:04,241 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: http error [504]
>> 2013-08-21 20:30:14,308 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [60/120s]: http error [504]
>> 2013-08-21 20:30:23,369 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [69/120s]: http error [504]
>> 2013-08-21 20:30:34,436 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [80/120s]: http error [504]
>> 2013-08-21 20:30:43,505 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [89/120s]: http error [504]
>> 2013-08-21 20:30:54,577 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: http error [504]
>> 2013-08-21 20:31:03,636 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [110/120s]: http error [504]
>> 2013-08-21 20:31:12,648 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: socket timeout [timed out]
>>
>> during boot.
>>
>> Somehow my network node should pick up these requests to 169.254.169.254,
>> but I have no idea how to make that happen.
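>>
>> For what it's worth, one way I plan to check whether anything is actually
>> listening for these requests in the DHCP namespace on the network node (a
>> rough sketch; <network-id> is a placeholder for the real network UUID):
>>
>> ip netns list
>> ip netns exec qdhcp-<network-id> netstat -lntp | grep ':80'
>>
>> If a quantum-ns-metadata-proxy (or similar) process is listening on port 80
>> in that namespace, the requests to 169.254.169.254 at least have somewhere
>> to land.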
>>
>> Thanks!
>>
>> --
>> Pavlik Salles Juan José
>>
>


-- 
Pavlik Salles Juan José

