[kolla][magnum] Unable to create a PersistentVolumeClaim with Cinder backend

Giuseppe Sannino km.giuseppesannino at gmail.com
Fri Mar 8 15:01:59 UTC 2019


Hi all,
I need your help.
As per the subject, I'm trying to create a PersistentVolumeClaim using
Cinder as the backend on a K8S cluster I deployed via Magnum, but I'm
not able to.

I think I'm hitting one or two issues (not sure if the second depends on
the first).
Here is the story.


*>Background<*
I managed to deploy a K8S cluster using Magnum and I can also create Pods
or any other K8S object, except persistent volumes and/or claims.

Configuration:
kolla-ansible: 7.0.1
OS Release: Rocky
Base Distro: CentOS

I applied the workaround from https://review.openstack.org/#/c/638400/ (a
previous issue I found), so the K8S cluster itself is working fine.


*> PROBLEM <*
I've created the following storageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard2
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova

NOTE: I'm trying to use Cinder as the backend.
NOTE2: With a hostPath volume I have no issue.
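
For reference, this is how I double-check that the class is actually
registered (assuming kubectl on the master already points at the
cluster's admin kubeconfig):

kubectl get storageclass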

I then tried to create a PersistentVolumeClaim referencing the above
StorageClass.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard2

It stays in "Pending" state forever.
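
The same warning events can also be seen directly on the claim (run from
the master, again assuming kubectl is configured there):

kubectl describe pvc cinder-claim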

I had a look at the "kube-controller-manager.service" log and found the
following:

--------
Mar 08 14:25:21 kube-cluster-march-ppqfakx76wsm-master-0.novalocal
runc[3013]: I0308 14:25:21.017065       1 event.go:221]
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default",
Name:"cinder-claim", UID:"ea41e0ac-41ad-11e9-8ffc-fa163e9b393f",
APIVersion:"v1", ResourceVersion:"112059", FieldPath:""}): type: 'Warning'
reason: 'ProvisioningFailed' Failed to provision volume with StorageClass
"standard2": OpenStack cloud provider was not initialized properly : stat
/etc/kubernetes/cloud-config: no such file or directory
--------

*>>> ISSUE 1 <<<*
It seems that the service expects the cloud provider configuration to be
in "/etc/kubernetes/cloud-config", a file which does not exist.

As a *workaround*, I created that file with the content of
"kube_openstack_config" (the commands I used are sketched right after the
file content below), which is:

[Global]
auth-url=http://10.1.7.201:5000/v3
user-id=eed8ce6dd9734268810f3c23ca91cb19
password=M3rE6Wx9sTB94BQD2m
trust-id=
ca-file=/etc/kubernetes/ca-bundle.crt
region=RegionOne
[LoadBalancer]
use-octavia=False
subnet-id=8c9e0448-2d40-449f-a971-e3acde193678
create-monitor=yes
monitor-delay=1m
monitor-timeout=30s
monitor-max-retries=3
[BlockStorage]
bs-version=v2


*>>> ISSUE 2 <<<*
Checking the "kube-controller-manager.service" log again, this time I see:

Mar 08 14:29:21 kube-cluster-march-ppqfakx76wsm-master-0.novalocal
runc[3013]: I0308 14:29:21.403038       1 event.go:221]
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default",
Name:"cinder-claim", UID:"ea41e0ac-41ad-11e9-8ffc-fa163e9b393f",
APIVersion:"v1", ResourceVersion:"112059", FieldPath:""}): type: 'Warning'
reason: 'ProvisioningFailed' Failed to provision volume with StorageClass
"standard2": unable to initialize cinder client for region: RegionOne, err:
unable to initialize cinder v2 client for region RegionOne: No suitable
endpoint could be found in the service catalog.
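
To see what the controller-manager's credentials actually get back from
the service catalog, I suppose something like this can be run (just a
sketch: user-id and password are the ones from the cloud-config above,
and the identity API version is assumed to be 3):

openstack --os-auth-url http://10.1.7.201:5000/v3 \
          --os-identity-api-version 3 \
          --os-user-id eed8ce6dd9734268810f3c23ca91cb19 \
          --os-password M3rE6Wx9sTB94BQD2m \
          catalog list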



From OpenStack I see:

root@hce03:~# openstack endpoint list | grep keystone
| 011e5905a4a04e348b3fd1d3d1a1ab09 | RegionOne | keystone     | identity
    | True    | public    | http://10.1.7.201:5000                    |
| 0fb51c8e0edb45839162a1afe412f9f7 | RegionOne | keystone     | identity
    | True    | internal  | http://10.1.7.200:5000                    |
| 34c269a31260459fbd2c0967fda55b1d | RegionOne | keystone     | identity
    | True    | admin     | http://10.1.7.200:35357                   |
root@hce03:~# openstack endpoint list | grep cinder
| 55b109ed383c4705b71c589bdb0da697 | RegionOne | cinderv3     | volumev3
    | True    | admin     | http://10.1.7.200:8776/v3/%(tenant_id)s   |
| 6386e4f41af94c25ac8c305c7dbc1af4 | RegionOne | cinderv3     | volumev3
    | True    | public    | http://10.1.7.201:8776/v3/%(tenant_id)s   |
| 78e706cd0cd74a42b43adc051100b0bc | RegionOne | cinderv2     | volumev2
    | True    | admin     | http://10.1.7.200:8776/v2/%(tenant_id)s   |
| 83a7da1e426f4aa4b5cac3f4a564f480 | RegionOne | cinderv3     | volumev3
    | True    | internal  | http://10.1.7.200:8776/v3/%(tenant_id)s   |
| a322b3442a62418098554d23ae6a1061 | RegionOne | cinder       | volume
    | True    | public    | http://10.1.7.201:8776/v1/%(tenant_id)s   |
| c82a65eb60cc49348397085233882ba1 | RegionOne | cinder       | volume
    | True    | internal  | http://10.1.7.200:8776/v1/%(tenant_id)s   |
| cf2e6d9cb6b640ea876c9f5fe16123d3 | RegionOne | cinderv2     | volumev2
    | True    | public    | http://10.1.7.201:8776/v2/%(tenant_id)s   |
| e71733a7dbb94a69a44d03ee14389eb5 | RegionOne | cinderv2     | volumev2
    | True    | internal  | http://10.1.7.200:8776/v2/%(tenant_id)s   |
| e9add20fea3c4d57b2a752832455f6f1 | RegionOne | cinder       | volume
    | True    | admin     | http://10.1.7.200:8776/v1/%(tenant_id)s   |

From the Kube master I checked whether the auth-url from the cloud-config
is reachable, and it is:

[root@kube-cluster-march-ppqfakx76wsm-master-0 kubernetes]# curl
http://10.1.7.201:5000/v3
{"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z",
"media-types": [{"base": "application/json", "type":
"application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links":
[{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}}

I also tried changing the block storage version ("bs-version") from "v2"
to "v1" and "v3": the same error again.

May I ask for some support on this?

Many thanks!

BR
/Giuseppe