Hi Tony,

Firstly, please remind me which Magnum version you are running. I would suggest stable/victoria, or at least stable/train. As for the output you posted:

| ID                                   | Name                                   | Status | Networks                          | Image            | Flavor         |
| eaccb23e-226b-4a36-a2a5-5b9d9bbb4fea | delta-23-cluster-p4khbhat4sgp-node-2   | ACTIVE | private=10.0.0.250, 192.168.20.71 | Fedora-Atomic-27 | kube_node_10gb |
| c7d42436-7c27-4a11-897e-d09eb716e9b9 | delta-23-cluster-p4khbhat4sgp-node-0   | ACTIVE | private=10.0.0.88, 192.168.20.52  | Fedora-Atomic-27 | kube_node_10gb |

Given that you can still see the image name in that server list output, the "boot_volume_size = 10" setting is not being applied for some reason. In other words, I'm confident that the config should resolve your local disk consumption issue, but something else is preventing it from taking effect.
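To double-check that the label actually reached the template and the cluster, something like this should show it (a sketch; output columns may vary with client version):

  openstack coe cluster template show <template-name> -c labels
  openstack coe cluster show <cluster-name> -c labels

And to find the Magnum version on a Kolla-based deployment, one way (assuming the default Kolla container name) is:

  docker exec magnum_api pip show magnum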


On 16/09/20 12:58 am, Tony Pearce wrote:
Hi Ionut, thank you for your reply. Do you know if this configuration prevents consumption of local disk on the compute node for instance storage, e.g. OS or swap?

Kind regards

On Tue, 15 Sep 2020, 20:53 Ionut Biru, <ionut@fleio.com> wrote:
Hi,

To boot the minions or master from a volume, I use the following labels:

boot_volume_size = 20 
boot_volume_type = ssd 
availability_zone = nova                  

The volume type and availability zone might differ on your setup.
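For reference, a sketch of how these labels can be passed at template creation time ("k8s-bfv" and "public" are placeholder names; the image is the one mentioned in this thread):

  openstack coe cluster template create k8s-bfv \
    --coe kubernetes \
    --image Fedora-Atomic-27 \
    --external-network public \
    --labels boot_volume_size=20,boot_volume_type=ssd,availability_zone=nova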

                  

On Tue, Sep 15, 2020 at 11:23 AM Tony Pearce <tonyppe@gmail.com> wrote:
Hi Feilong, I hope you are keeping well. 

Thank you for sticking with me on this issue to try and help me here. I really appreciate it! 

I tried creating a new flavour as you suggested, using 10GB for the root volume [1]. The cluster does start to be created (no error about a 0MB disk), but during creation I can check the compute node and see that the instance disk is being provisioned on the compute node [2]. I assume that this is the 10GB root volume specified in the flavour.
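For reference, the flavour was created along these lines (a sketch; the name matches the flavour in the server list at the top of the thread, and the CPU/memory values are the ones given further down):

  openstack flavor create kube_node_10gb --vcpus 2 --ram 2048 --disk 10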

When I list the volumes which have been created, I do not see the 10GB disk allocated on the compute node, but I do see the iSCSI network volume that has been created and attached to the instance (e.g. the master) [3]. This is a 15GB volume, and the 15GB comes from the Kubernetes cluster template, under "Docker Volume Size (GB)" in the "node spec" section. Very little data had been written to this volume by the time the master instance booted.

Eventually, the kube cluster failed to create with the error "Status Create_Failed: Resource CREATE failed: Error: resources.kube_minions.resources[0].resources.node_config_deployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1". I'll try to find the root cause of this later.
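(A likely starting point for that, using Heat's standard tooling, is to inspect the failed resource and then the agent logs on the node itself:

  openstack stack failures list --long <stack-id>)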

What are your thoughts on this outcome? Is it possible to avoid consuming compute node disk entirely? I need to avoid it because local disk on the compute nodes cannot scale.


Kind regards,
Tony

Tony Pearce



On Mon, 14 Sep 2020 at 17:44, feilong <feilong@catalyst.net.nz> wrote:

Hi Tony,

Does your Magnum support this config: https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 ? If so, can you try changing it from 0 to 10? The 10 is the root disk volume size in GB for the k8s nodes; the default of 0 means the nodes are based on the image instead of a volume.
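That option lives in the [cinder] section of magnum.conf, so the change would look roughly like this (option names as defined in the file linked above; the volume type is a placeholder):

  [cinder]
  default_boot_volume_size = 10
  default_boot_volume_type = <your Cinder volume type>

followed by a restart of the Magnum services.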


On 14/09/20 9:37 pm, Tony Pearce wrote:
Hi Feilong, sure. The flavour I used has 2 CPUs and 2GB memory; all other values are either unset or 0MB.
I also used the same Fedora 27 image that is being used for the Kubernetes cluster.

Thank you
Tony

On Mon, 14 Sep 2020, 17:20 feilong, <feilong@catalyst.net.nz> wrote:

Hi Tony,

Could you please let me know your flavor details? I would like to test it in my devstack environment (based on LVM). Thanks.


On 14/09/20 8:27 pm, Tony Pearce wrote:
Hi Feilong, hope you are keeping well. Thank you for the info!

For issue 1: maybe this should go to the Kayobe/Kolla-Ansible team. Thanks for the insight :)

For the 2nd one, I was able to run the HOT template from your link. There were no issues at all running it multiple times concurrently while using the 0MB disk flavour. I tried four times, with the last three launched one after the other so that they ran in parallel. All completed successfully and none complained about the 0MB disk issue.
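(Each run was just a plain stack create against a local copy of your template, along these lines, with the file and stack names as placeholders and any needed parameters passed via --parameter:

  openstack stack create -t test.yaml test-0mb-1)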

Does this suggest that the create-failed error relates to Magnum, or could you suggest other steps to test on my side?

Best regards,

Tony Pearce




On Thu, 10 Sep 2020 at 16:01, feilong <feilong@catalyst.net.nz> wrote:

Hi Tony,

Sorry for the late response to your thread.

For your HTTPS issue: we (Catalyst Cloud) are running Magnum with HTTPS and it works.
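One thing worth checking is whether Magnum is being given your CA bundle. A sketch of the relevant magnum.conf options (whether these are the right knobs for your particular failure is an assumption on my part, and the path is a placeholder):

  [drivers]
  verify_ca = true
  openstack_ca_file = /etc/magnum/ca-bundle.crt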

For the 2nd issue, I think we were misunderstanding each other about the node disk capacity. I was assuming you were talking about the k8s nodes, but it seems you're talking about the physical compute host. I still don't think it's a Magnum issue, because the k8s master/worker nodes are just normal Nova instances managed by Heat. So I would suggest testing with a simple HOT; you can use this one: https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6
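If fetching the gist is awkward, a minimal template along the same lines would be (the image/flavor/network names are just examples from this thread; adjust them to your environment):

  heat_template_version: 2018-08-31

  resources:
    test_server:
      type: OS::Nova::Server
      properties:
        image: Fedora-Atomic-27
        flavor: kube_node_10gb
        networks:
          - network: private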

Most of the cloud providers and organizations who have adopted Magnum are using Ceph, as far as I know. Just FYI.


On 10/09/20 4:35 pm, Tony Pearce wrote:
Hi all, hope you are all keeping safe and well. I am looking for information on the following two issues I have with the Magnum project:

1. Magnum does not support the OpenStack API with HTTPS
2. Magnum forces compute nodes to consume disk capacity for instance data

My environment: OpenStack Train deployed using Kayobe (Kolla-Ansible).

With regards to the HTTPS issue, Magnum stops working after enabling HTTPS because the certificate / CA certificate is not trusted by Magnum. The certificate I am using was purchased from GoDaddy, is valid, and is trusted by web browsers, just not by the Magnum component.

Regarding the compute node disk consumption issue: I'm at a loss here, so I'm looking for more information about why this is being done and whether there is any way to avoid it. I have storage provided by a Cinder integration, so I need to avoid consuming compute node disk for instance data.

Any information the community could provide with regards to the above would be much appreciated. I would very much like to use Magnum in this deployment to provide Kubernetes within projects.

Thanks in advance, 

Regards,

Tony


--
Ionut Biru - https://fleio.com
-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
------------------------------------------------------