[Openstack-operators] IPV6 help liberty

suresh kumar boilingbabu at gmail.com
Wed Jul 13 16:01:09 UTC 2016


I changed the subnet to SLAAC. The instance now fails to fetch metadata, and
in the cloud-init log I don't see an IP assigned to the interface, but when I
log into the instance I do see an IPv6 address.

Do I need to update any metadata configuration?
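The metadata service only answers on the IPv4 link-local address 169.254.169.254, so an instance on an IPv6-only SLAAC network has no route to it (which matches the "Network is unreachable" errors below). One common workaround, sketched here with hypothetical names (`xenial-image`, `ipv6-net`, `my-key`), is to boot with a config drive so cloud-init reads metadata from a local disk instead of over the network:

```shell
# Boot the instance with a config drive attached; cloud-init will read
# instance metadata from the local disk rather than 169.254.169.254.
# Image, network, and key names below are placeholders for your cloud.
nova boot --flavor m1.small --image xenial-image \
  --nic net-id=$(neutron net-show -f value -F id ipv6-net) \
  --key-name my-key --config-drive true test-vm
```

This is a sketch against the Liberty-era CLI clients; on newer clouds the equivalent is `openstack server create --config-drive true ...`.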

Cloud-init v. 0.7.5 running 'init' at Wed, 13 Jul 2016 15:49:44 +0000.
Up 134.02 seconds.
ci-info: +++++++++++++++++++++++Net device info+++++++++++++++++++++++
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |         .         |
ci-info: |  eth0  | True |     .     |     .     | fa:16:3e:eb:f5:cf |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2016-07-13 15:49:45,127 - url_helper.py[WARNING]: Calling
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[0/120s]: request error [HTTPConnectionPool(host='169.254.169.254',
port=80): Max retries exceeded with url:
/2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>:
[Errno 101] Network is unreachable)]
[the same "Network is unreachable" warning repeats at 1/120s, 2/120s,
3/120s, and 4/120s]
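For reference, a minimal SLAAC setup in Neutron might look like the sketch below (subnet name, network name, router name, and the documentation prefix 2001:db8:1::/64 are all hypothetical). With `ipv6-ra-mode slaac`, the router advertisements that instances autoconfigure from are sent by the Neutron router, which is why the subnet has to be attached to a router interface:

```shell
# Create an IPv6 subnet where both RA mode and address mode are SLAAC:
# the Neutron router's radvd advertises the prefix and instances
# autoconfigure their own addresses from it.
neutron subnet-create --ip-version 6 \
  --ipv6-ra-mode slaac --ipv6-address-mode slaac \
  --name ipv6-subnet my-net 2001:db8:1::/64

# Attach the subnet to a router so RAs are actually sent on the network.
neutron router-interface-add my-router ipv6-subnet
```

This uses the Liberty-era `neutron` client syntax; adjust names and the prefix to your deployment.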


On Wed, Jul 13, 2016 at 10:43 AM, suresh kumar <boilingbabu at gmail.com>
wrote:

> Thanks for the reply Jens,
>
> What configuration needs to be done if I want to use SLAAC? Does my IPv6
> subnet need to be attached to a router interface?
>
>
>
> On Wed, Jul 13, 2016 at 1:55 AM, Jens Rosenboom <j.rosenboom at x-ion.de>
> wrote:
>
>> 2016-07-12 20:55 GMT+02:00 suresh kumar <boilingbabu at gmail.com>:
>> > Hi All,
>> >
>> > I have created an IPv6 VLAN in Neutron with the DHCPv6-stateful option.
>> > When I create instances on this IPv6 VLAN, DHCP fails to assign an IP
>> > to the instances, and they only get a link-local address.
>> >
>> > I am able to ping the gateway with the link-local address, but not the
>> > other instances on the same VLAN.
>> >
>> > Is there any configuration that needs to be done in Neutron to make
>> > this work? My IPv6 VLAN is routable, so I didn't attach it to any
>> > router interface inside Neutron.
>>
>> Cirros does not yet support DHCPv6, see
>> https://bugs.launchpad.net/cirros/+bug/1487041.
>>
>> It also looks like other images will only do SLAAC by default, so you
>> would have to explicitly set up a DHCPv6 client in your guest, e.g.
>> for ubuntu-xenial run "sudo dhclient -6 ens3".
>>
>
>
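If you stay with DHCPv6-stateful rather than switching to SLAAC, the manual `dhclient -6` suggested above only lasts until reboot. A hedged way to make it persistent on ubuntu-xenial (assuming the guest NIC is ens3 and the image uses ifupdown rather than netplan) is an `inet6 dhcp` stanza:

```shell
# Persist the DHCPv6 client across reboots by adding an ifupdown stanza
# for the interface (interface name ens3 is an assumption; check with
# "ip link" in your guest).
cat >> /etc/network/interfaces <<'EOF'

iface ens3 inet6 dhcp
EOF
```

After writing the stanza, `ifdown ens3 && ifup ens3` (or a reboot) should bring the interface up with a DHCPv6-assigned address.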

