[openstack-dev] [Magnum] API response on k8s failure

Adrian Otto adrian.otto at rackspace.com
Tue Sep 15 00:30:24 UTC 2015


Ryan,

Thanks for sharing this. Sorry you got off to a bumpy start. I suggest you do file a bug for this against Magnum, and we can decide how best to handle it. I can't tell from your email what kubectl would do with the same input. We might have an opportunity to make both better.

If you need guidance for how to file a bug, feel free to email me directly and I can point you in the right direction.

Thanks,

Adrian

> On Sep 14, 2015, at 3:05 PM, Ryan Rossiter <rlrossit at linux.vnet.ibm.com> wrote:
> 
> I was giving a devstacked version of Magnum a try last week, and from a new user's standpoint, I hit a big roadblock that caused me a lot of confusion. Here's my story:
> 
> I was attempting to create a pod in a k8s bay, and I provided it with a sample manifest from the Kubernetes repo. The Magnum API then returned the following error:
> 
> ERROR: 'NoneType' object has no attribute 'host' (HTTP 500)
> 
> I hunted the error down to here [1]. The k8s_api call was failing, but the conductor carried on anyway as if the call had succeeded. I dug through the API calls to find the true cause of the error:
> 
> {u'status': u'Failure', u'kind': u'Status', u'code': 400, u'apiVersion': u'v1beta3', u'reason': u'BadRequest', u'message': u'Pod in version v1 cannot be handled as a Pod: no kind "Pod" is registered for version "v1"', u'metadata': {}}
> 
> It turned out the error occurred because the manifest I was using had apiVersion v1, not v1beta3. That was far from clear from the 500 that Magnum originally returned.
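> 
> (Purely for illustration -- paraphrased as the parsed headers rather than the exact YAML file I used -- the mismatch boils down to the first fields of the manifest:)
> 
>     # what my manifest declared (rejected with the error above):
>     rejected = {'apiVersion': 'v1', 'kind': 'Pod'}
>     # what the bay's v1beta3 client/API can actually handle:
>     accepted = {'apiVersion': 'v1beta3', 'kind': 'Pod'}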
> 
> This all does occur within a try block, but the k8s client isn't raising any exception that [2] can catch. Was this caused by a regression in the k8s client? It looks like the original intention was to catch anything going wrong in k8s and forward the message & error code on so the Magnum API could return them.
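> 
> What I'd expect there is roughly the following -- a sketch only, with placeholder names rather than the real magnum or k8sclient symbols:
> 
>     # Sketch: translate a k8s "Status" failure dict into an exception so
>     # the REST layer can return the upstream code/message instead of a
>     # generic 500.
>     class K8sApiFailure(Exception):          # placeholder exception type
>         def __init__(self, code, message):
>             super(K8sApiFailure, self).__init__(message)
>             self.code = code
>             self.message = message
> 
>     def check_k8s_response(resp):
>         """Raise if the client handed back a failure Status object."""
>         if isinstance(resp, dict) and resp.get('status') == 'Failure':
>             raise K8sApiFailure(resp.get('code', 500),
>                                 resp.get('message', 'unknown k8s error'))
>         return resp
> 
>     # hypothetical usage: wrap whatever the k8s_api call returns, e.g.
>     #     resp = check_k8s_response(k8s_api_call_result)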
> 
> My question here is: does this qualify as a bug? This happens in more places than just pod create. Fixing it means changing quite a few API return values, and I don't know how that is handled in the Magnum project. If we want to treat this as a blueprint, I can open one up, target it for Mitaka, and get to work. If it should be opened as a bug, I can do that instead and start work on it ASAP.
> 
> [1] https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L88-L108
> [2] https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L94
> 
> -- 
> Thanks,
> 
> Ryan Rossiter (rlrossit)
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


