[openstack-dev] [magnum] Issue on going through the quickstart guide
Jay Lau
jay.lau.513 at gmail.com
Mon Feb 23 01:38:18 UTC 2015
I suspect that something goes wrong after the pod/service definitions are parsed.
Can you please try the native k8s command first, then debug the k8s API part to
check the difference between the original JSON file and the parsed JSON file?
Thanks!
kubectl create -f xxxx.json
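For example, something along these lines (a rough sketch; the manifest and pod
names are taken from the k8s redis example, and "-o json" support is assumed for
the kubectl build on your master):

# On the Kubernetes master, create the pod with the native client,
# bypassing the Magnum API:
kubectl create -f redis-master.json

# Dump what the API server actually stored and diff it against the original
# manifest to spot anything the parsing step may have changed:
kubectl get pod redis-master -o json > parsed.json
diff redis-master.json parsed.json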
2015-02-23 1:40 GMT+08:00 Hongbin Lu <hongbin034 at gmail.com>:
> Thanks Jay,
>
> I checked the kubelet log. There are a lot of "Watch closed" errors like the
> ones below. Here is the full log: http://fpaste.org/188964/46261561/ .
>
> *Status:"Failure", Message:"unexpected end of JSON input", Reason:""*
> *Status:"Failure", Message:"501: All the given peers are not reachable*
>
> Please note that my environment was set up by following the quickstart
> guide. It seems that all the kube components were running (checked with the
> systemctl status command), and all nodes can ping each other. Any further
> suggestions?
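>
> If it helps, I can also check whether etcd is reachable, since the "501"
> message looks like an etcd client error (a sketch; port 4001 and the /version
> endpoint are assumptions based on a default etcd v2 setup):
>
> # On the master node: is etcd up and answering?
> systemctl status etcd
> curl -s http://127.0.0.1:4001/version
>
> # From a minion: can it reach the etcd endpoint the kube services point at?
> curl -s http://<master-ip>:4001/version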
>
> Thanks,
> Hongbin
>
>
> On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau <jay.lau.513 at gmail.com> wrote:
>
>> Can you check the kubelet log on your minions? It seems the container failed
>> to start; there might be something wrong with your minion nodes. Thanks.
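>>
>> For example (a sketch; the unit names assume the systemd services set up by
>> the quickstart's Fedora Atomic image):
>>
>> # On each minion:
>> systemctl status kubelet docker
>> journalctl -u kubelet --no-pager | tail -n 100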
>>
>> 2015-02-22 15:08 GMT+08:00 Hongbin Lu <hongbin034 at gmail.com>:
>>
>>> Hi all,
>>>
>>> I tried to work through the new redis example in the quickstart guide [1],
>>> but was not able to complete it. I was blocked when connecting to the redis
>>> slave container:
>>>
>>> $ docker exec -i -t $REDIS_ID redis-cli
>>> Could not connect to Redis at 127.0.0.1:6379: Connection refused
>>>
>>> Here is the container log:
>>>
>>> $ docker logs $REDIS_ID
>>> Error: Server closed the connection
>>> Failed to find master.
>>>
>>> It looks like the redis master disappeared at some point. I checked the
>>> status about every minute. Below is the output.
>>>
>>> $ kubectl get pod
>>> NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
>>> 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
>>> redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Pending
>>>                                        kubernetes/redis:v1
>>> 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending
>>>
>>> $ kubectl get pod
>>> NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
>>> 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
>>> 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
>>> redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
>>>                                        kubernetes/redis:v1
>>>
>>> $ kubectl get pod
>>> NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
>>> redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
>>>                                        kubernetes/redis:v1
>>> 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
>>> 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
>>> 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
>>>
>>> $ kubectl get pod
>>> NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
>>> 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
>>> 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
>>> 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
>>> 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
>>>
>>> Is anyone able to reproduce the problem above? If so, I am going to
>>> file a bug.
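>>>
>>> For anyone trying to reproduce, the container-level logs on the minion can
>>> be pulled like this (a sketch; the grep pattern is just an assumption about
>>> how the redis containers show up in docker ps):
>>>
>>> # Find the redis/sentinel containers and check why the slave lost the master:
>>> docker ps -a | grep redis
>>> docker logs <container-id>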
>>>
>>> Thanks,
>>> Hongbin
>>>
>>> [1]
>>> https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack
>>>
>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>>
>>
>>
>
>
>
--
Thanks,
Jay Lau (Guangya Liu)