[magnum] cluster node fail to establish connection (wrong IP address)

Zufar Dhiyaulhaq zufar at onf-ambassador.org
Sat Dec 15 18:40:50 UTC 2018


Hi Ignazio,

This is my Heat configuration:

[root at zu-controller0 ~(keystone_admin)]# egrep -v ^'(#|$)'
/etc/heat/heat.conf
[DEFAULT]
heat_metadata_server_url=http://10.60.60.10:8000
heat_waitcondition_server_url=http://10.60.60.10:8000/v1/waitcondition
heat_watch_server_url=http://10.60.60.10:8003
stack_user_domain_name=heat
stack_domain_admin=heat_admin
stack_domain_admin_password=b2464b7fd4694efa
num_engine_workers=4
auth_encryption_key=d954b3c680fd42fb
debug=False
log_dir=/var/log/heat
transport_url=rabbit://guest:guest@10.60.60.10:5672/
[auth_password]
[clients]
[clients_aodh]
[clients_barbican]
[clients_ceilometer]
[clients_cinder]
[clients_designate]
[clients_glance]
[clients_heat]
[clients_keystone]
auth_uri=http://10.61.61.10:35357
[clients_magnum]
[clients_manila]
[clients_mistral]
[clients_monasca]
[clients_neutron]
[clients_nova]
[clients_octavia]
[clients_sahara]
[clients_senlin]
[clients_swift]
[clients_trove]
[clients_zaqar]
[cors]
[database]
connection=mysql+pymysql://heat:c553beb7ef5e4ac6@10.60.60.10/heat
[ec2authtoken]
auth_uri=http://10.61.61.10:5000/v3
[eventlet_opts]
[healthcheck]
[heat_api]
workers=4
[heat_api_cfn]
workers=4
[heat_api_cloudwatch]
[matchmaker_redis]
[noauth]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
ssl=False
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
policy_file=/etc/heat/policy.json
[paste_deploy]
[profiler]
[revision]
[ssl]
[trustee]
auth_type=password
auth_url=http://10.60.60.10:35357
project_domain_name=Default
username=heat
user_domain_name=Default
password=c9b4b2e3fd704048
[volumes]
[keystone_authtoken]
auth_uri=http://10.60.60.10:5000/v3
auth_type=password
auth_url=http://10.60.60.10:35357
username=heat
password=c9b4b2e3fd704048
user_domain_name=Default
project_name=services
project_domain_name=Default

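Since the nodes contact the wrong address, it can help to confirm which auth URL Magnum actually baked into the cluster. On the Fedora Atomic images the Magnum drivers write the Heat parameters to a file on the master node (the path /etc/sysconfig/heat-params and the AUTH_URL variable name are assumptions based on those drivers and may vary by release); a minimal sketch of the check, faking the file locally so the commands run anywhere:

```shell
# Sketch only: on a real master node this file already exists and is
# written by cloud-init; we fake it here to illustrate the check.
# /etc/sysconfig/heat-params and AUTH_URL are assumptions, not verified
# against this deployment.
cat > /tmp/heat-params <<'EOF'
AUTH_URL="http://10.60.60.10:5000/v3"
EOF

# If this still shows the management address (10.60.60.10), the cluster
# was created against an endpoint the instances cannot reach.
grep AUTH_URL /tmp/heat-params
```

On the real node the command would be `grep AUTH_URL /etc/sysconfig/heat-params`, run over SSH as the fedora user.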
And this is my Magnum configuration:

[root at zu-controller0 ~(keystone_admin)]# egrep -v ^'(#|$)'
/etc/magnum/magnum.conf
[DEFAULT]
log_dir=/var/log/magnum
transport_url=rabbit://guest:guest@10.60.60.10:5672/
[cors]
[database]
connection=mysql+pymysql://magnum:b08c5fe90a0e42e7@10.60.60.10/magnum
[keystone_authtoken]
auth_uri=http://10.60.60.10:5000/v3
auth_version=v3
memcached_servers=10.60.60.10:11211
auth_type=password
admin_tenant_name=services
admin_user=magnum
admin_password=f784e0ac913e41e7
auth_url=http://10.60.60.10:35357
username=magnum
password=f784e0ac913e41e7
user_domain_name=Default
project_name=services
project_domain_name=Default
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver=messagingv2
[oslo_messaging_rabbit]
ssl=False
[oslo_messaging_zmq]
[oslo_policy]
policy_file=/etc/magnum/policy.json
[trust]
trustee_domain_name=magnum
trustee_domain_admin_name=magnum_admin
trustee_domain_admin_password=f784e0ac913e41e7
trustee_keystone_interface=public
[api]
port=9511
host=0.0.0.0
max_limit=1000
enabled_ssl=False
[barbican_client]
region_name=RegionOne
[cinder_client]
region_name=RegionOne
[glance_client]
region_name=RegionOne
api_version=2
insecure=False
[heat_client]
region_name=RegionOne
api_version=1
insecure=False
[magnum_client]
region_name=RegionOne
[neutron_client]
region_name=RegionOne
insecure=False
[nova_client]
region_name=RegionOne
api_version=2
insecure=False
[certificates]
cert_manager_type=local

Best Regards,
Zufar Dhiyaulhaq


On Sun, Dec 16, 2018 at 1:39 AM Zufar Dhiyaulhaq <zufar at onf-ambassador.org>
wrote:

> Hi Ignazio,
>
> I tried changing the Heat configuration as you suggested, but my
> instances still try to reach 10.60.60.10.
> This is the output:
>
> | 0c34c8b0dcdc459f818c2bab04913039 | RegionOne | keystone | identity | True | public   | http://10.60.60.10:5000/v3  |
> | 8ca42fccdb774f9daf67c95ea61fd006 | RegionOne | keystone | identity | True | admin    | http://10.60.60.10:35357/v3 |
> | 8d0a23730608495980dce151d466a579 | RegionOne | keystone | identity | True | internal | http://10.60.60.10:5000/v3  |
>
> Testing with curl:
>
> [fedora at swarm-cluster-wtet5ppc5eqi-primary-master-0 ~]$ curl
> http://10.60.60.10:5000/v3
> ^C
> [fedora at swarm-cluster-wtet5ppc5eqi-primary-master-0 ~]$ curl
> http://10.61.61.10:5000/v3
> {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z",
> "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links":
> [{"href": "http://10.61.61.10:5000/v3/", "rel": "self"}]}}
> [fedora at swarm-cluster-wtet5ppc5eqi-primary-master-0 ~]$
>
>
> Best Regards,
> Zufar Dhiyaulhaq
>
>
> On Sun, Dec 16, 2018 at 1:20 AM Ignazio Cassano <ignaziocassano at gmail.com>
> wrote:
>
>> What does this command report?
>>
>> openstack endpoint list | grep keystone
>>
>>
>> On Sat, Dec 15, 2018 at 19:16, Zufar Dhiyaulhaq <
>> zufar at onf-ambassador.org> wrote:
>>
>>> By design, my OpenStack deployment has two networks:
>>>
>>> External Network (Floating IP) : 10.61.61.0/24
>>> Management Network & Data Network (private): 10.60.60.0/24
>>>
>>> An instance deployed with a floating IP can't reach the management
>>> network. Is it possible to change the URL from the error log
>>> (10.60.60.10) to 10.61.61.10?
>>>
>>> Best Regards,
>>> Zufar Dhiyaulhaq
>>>
>>>
>>> On Sun, Dec 16, 2018 at 1:12 AM Zufar Dhiyaulhaq <
>>> zufar at onf-ambassador.org> wrote:
>>>
>>>> Yes, the master node communicates on port 5000, but with the wrong IP
>>>> address.
>>>>
>>>> The floating IP given to my instance is in 10.61.61.0/24 but all of my
>>>> endpoints are in 10.60.60.0/24; when I manually curl the endpoint on
>>>> 10.61.61.0/24, it works fine.
>>>> Is it possible to change the IP address configured by cloud-init to
>>>> 10.61.61.X?
>>>>
>>>> Best Regards,
>>>> Zufar Dhiyaulhaq
>>>>
>>>>
>>>> On Sun, Dec 16, 2018 at 1:08 AM Ignazio Cassano <
>>>> ignaziocassano at gmail.com> wrote:
>>>>
>>>>> Zufar, the master node must communicate with port 5000 on the network
>>>>> where you deployed the keystone endpoint.
>>>>> Ignazio
>>>>> On Sat, Dec 15, 2018 at 18:58, Zufar Dhiyaulhaq <
>>>>> zufar at onf-ambassador.org> wrote:
>>>>>
>>>>>> Hi, I am creating a Swarm cluster with these commands:
>>>>>>
>>>>>>    - openstack coe cluster template create swarm-cluster-template
>>>>>>    --image fedora-atomic-latest --external-network external --dns-nameserver
>>>>>>    8.8.8.8 --master-flavor m1.small --flavor m1.small --coe swarm-mode
>>>>>>    --docker-volume-size 4 --docker-storage-driver=devicemapper
>>>>>>    - openstack coe cluster create swarm-cluster --cluster-template
>>>>>>    swarm-cluster-template --master-count 1 --node-count 1 --keypair mykey
>>>>>>
>>>>>> but it is stuck on *swarm_primary_master* in Heat with
>>>>>> *CREATE_IN_PROGRESS*. I logged in to the Swarm VM to check the log.
>>>>>>
>>>>>> I get this:
>>>>>>
>>>>>>
>>>>>>    - requests.exceptions.ConnectionError:
>>>>>>    HTTPConnectionPool(host='10.60.60.10', port=5000): Max retries exceeded
>>>>>>    with url: /v3/auth/tokens (Caused by
>>>>>>    NewConnectionError('<urllib3.connection.HTTPConnection object at
>>>>>>    0x7f02f57321d0>: Failed to establish a new connection: [Errno 110]
>>>>>>    Connection timed out',))
>>>>>>    - Cloud-init v. 0.7.9 running 'modules:final' at Sat, 15 Dec 2018
>>>>>>    17:36:44 +0000. Up 33.29 seconds.
>>>>>>
>>>>>>
>>>>>> The Swarm instance tries to open a connection to 10.60.60.10, which
>>>>>> is my OpenStack management IP address. By design it cannot reach it;
>>>>>> a manual curl to 10.60.60.10 fails, while a curl to 10.61.61.10, my
>>>>>> floating/external IP for the OpenStack cluster, works.
>>>>>>
>>>>>> Does anyone know how to make cloud-init use 10.61.61.10 instead?
>>>>>>
>>>>>> Best Regards,
>>>>>> Zufar Dhiyaulhaq
>>>>>>
>>>>>