Tenant vlan Network Issue
Hello Team,

Whenever I create an internal network of type vlan, the instances don't get IPs, but external networks work fine.

Below is the etc/kolla/config/neutron/ml2_conf.ini file:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,flat
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
network_vlan_ranges = physnet1

[ml2_type_flat]
flat_networks = physnet1

Is there something I need to change, or do I need to configure the interface differently?

Regards
Tony Karera
Your issue is that tenant_network_types should be vxlan.
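That is, in the ml2_conf.ini you posted, a minimal sketch of the change (everything else stays as it is):

[ml2]
tenant_network_types = vxlan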
--
Mohammed Naser
VEXXHOST, Inc.
How can I make them both vxlan and vlan?

Regards
Tony Karera
Hi,

You can use vlan as a tenant network type too. But you have to remember that vlan is a provider network type, which means the VLANs also need to be configured on your network infrastructure. In the case of a tenant network, Neutron automatically allocates one of the VLAN IDs to the network, as users don't have the ability to choose a VLAN ID for a network (at least under Neutron's default policy).

So you should check which VLAN ID is used by such a tenant network, check whether that VLAN is properly configured on your switches, and then, if all is good, check where exactly the DHCP requests are dropped (are they leaving the compute node? are the packets reaching the node with the DHCP agent?). Based on that, you can narrow down where the issue is.
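For example, to allow both vxlan and vlan tenant networks, both types need to be listed under tenant_network_types, and tenant VLAN allocation needs a range to draw from. A minimal sketch; the VLAN range and VNI range below are only examples, pick values that match your fabric:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan,vlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
# Without a range, physnet1 can only carry provider networks with
# admin-chosen VLAN IDs; tenant allocation needs a pool like this.
network_vlan_ranges = physnet1:100:200

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000

And to see which VLAN ID Neutron allocated to a tenant network, and where the DHCP traffic stops, something like this should work (the interface name is an example):

# as admin; shows the allocated segmentation ID
openstack network show <network> -c provider:network_type -c provider:segmentation_id

# run on the compute node, then on the node hosting the DHCP agent
tcpdump -e -n -i <physnet1-interface> port 67 or port 68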
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
Hello Slawek,

I really need your help. I have tried all I can, but the k8s cluster still gets stuck with this log:

[root@k8s-cluster-1-vo5s5jvtjoa7-master-0 core]# cd /var/log/heat-config/heat-config-script/
[root@k8s-cluster-1-vo5s5jvtjoa7-master-0 heat-config-script]# ls
a70c65af-34e2-4071-be18-2e4d66f85cf6-k8s-cluster-1-vo5s5jvtjoa7-kube_cluster_config-asfqq352zosg.log
c42dcbf1-7812-49a2-b312-96c24225b407-k8s-cluster-1-vo5s5jvtjoa7-kube_masters-waugw6vksw5p-0-vrt2gkfugedg-master_config-xv3vlmr27r46.log
[root@k8s-cluster-1-vo5s5jvtjoa7-master-0 heat-config-script]# cat a70c65af-34e2-4071-be18-2e4d66f85cf6-k8s-cluster-1-vo5s5jvtjoa7-kube_cluster_config-asfqq352zosg.log
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
+ sleep 5
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
+ sleep 5

Regards
Tony Karera
Hi,

> The connection to the server localhost:8080 was refused - did you
> specify the right host or port?

But why is your script trying to connect to localhost? That's not the VM address, I guess :)
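For context: kubectl falls back to http://localhost:8080 when it has no kubeconfig at all, so this usually means the script is running without one. A quick manual check on the master node would be something like the following; the kubeconfig path is an assumption, not necessarily where Magnum writes it:

export KUBECONFIG=/etc/kubernetes/admin.conf   # assumed path
kubectl get --raw=/healthz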
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
Hello Slawek,

Unfortunately I am not sure, because all I did was configure the cluster. Nothing else.

Regards
Tony Karera
Hi Karera,

I don't think the error is related to the network side. It's because kubelet is not installed properly on your Kubernetes master node. You need to check the logs to see why the hyperkube services are not started.

You can check the heat logs on the master node, and you can check the status of the kubelet container via the podman ps command.

Ammad
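For example, on the master node (standard commands; the log directory is the one from your earlier mail):

# list all containers, including any that exited or never started
sudo podman ps -a

# bootstrap script output written by the heat agent
sudo ls /var/log/heat-config/heat-config-script/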
--
Regards,
Syed Ammad Ali
Hello Team,

I tried to create another cluster with TLS enabled, and below are the results of podman ps. I have also attached the heat logs.

podman ps
CONTAINER ID  IMAGE                                                             COMMAND               CREATED         STATUS             PORTS  NAMES
e9c028723166  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  34 minutes ago  Up 34 minutes ago         heat-container-agent
4128a9e951de  k8s.gcr.io/hyperkube:v1.18.16                                     kube-apiserver --...  30 minutes ago  Up 30 minutes ago         kube-apiserver
5430aa72e7f0  k8s.gcr.io/hyperkube:v1.18.16                                     kube-controller-m...  30 minutes ago  Up 30 minutes ago         kube-controller-manager
8f2c76272916  k8s.gcr.io/hyperkube:v1.18.16                                     kube-scheduler --...  30 minutes ago  Up 30 minutes ago         kube-scheduler
af5f1a661c65  quay.io/coreos/etcd:v3.4.6                                        /usr/local/bin/et...  30 minutes ago  Up 30 minutes ago         etcd

Regards
Tony Karera
You can see here that the kubelet container is not started. You need to check the kubelet logs.

Can you confirm which fcos version you are using?

Ammad
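For reference, the Fedora CoreOS version can be read on the node itself with either of these standard commands:

cat /etc/os-release
rpm-ostree status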
--
Regards,
Syed Ammad Ali
Hello Ammad,

Yes, I can see that. I can't see the kubelet logs, though. Please advise on how to check the fcos logs; it is new to me, unfortunately.

Regards
Tony Karera
Can you confirm which Fedora CoreOS image version you are using for your Magnum clusters? I suspect you are using 34, which has trouble with the kubelet service. You should use version 33 or 32.

You can check the kubelet service with systemctl status kubelet, and look at its logs.
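For example, on the master node:

# state and recent logs of the kubelet unit
sudo systemctl status kubelet
sudo journalctl -u kubelet -b --no-pager | tail -n 50

And if the image does turn out to be the problem, a sketch of registering a Fedora CoreOS 33 image for Magnum; the qcow2 file name is illustrative, and the os_distro property is what Magnum keys on:

openstack image create fedora-coreos-33 \
  --disk-format qcow2 --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-33.qcow2

openstack coe cluster template create k8s-fcos33 \
  --image fedora-coreos-33 \
  --coe kubernetes \
  --external-network <external-net> \
  --flavor <flavor> --master-flavor <flavor>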
--
Regards,
Syed Ammad Ali
Hello Ammad,

Thanks a lot, bro. I really, really appreciate it.

The issue was the Fedora CoreOS version I was using. I changed to 33 and everything worked OK.

Thanks a lot, everyone.

Regards
Tony Karera
participants (4)

- Ammad Syed
- Karera Tony
- Mohammed Naser
- Slawek Kaplonski