Kubernetes cluster master node not ready


I don't know why my master node is in NotReady status while all pods on the cluster run normally. I am using Kubernetes v1.7.5 with the Calico network plugin, and the OS is CentOS 7.2.1511.

# kubectl get nodes
NAME        STATUS     AGE       VERSION
k8s-node1   Ready      1h        v1.7.5
k8s-node2   NotReady   1h        v1.7.5

# kubectl get all --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-11kvm                           2/2       Running   0          33m
kube-system   po/calico-policy-controller-1906845835-1nqjj   1/1       Running   0          33m
kube-system   po/calicoctl                                   1/1       Running   0          33m
kube-system   po/etcd-k8s-node2                              1/1       Running   1          15m
kube-system   po/kube-apiserver-k8s-node2                    1/1       Running   1          15m
kube-system   po/kube-controller-manager-k8s-node2           1/1       Running   2          15m
kube-system   po/kube-dns-2425271678-2mh46                   3/3       Running   0          1h
kube-system   po/kube-proxy-qlmbx                            1/1       Running   1          1h
kube-system   po/kube-proxy-vwh6l                            1/1       Running   0          1h
kube-system   po/kube-scheduler-k8s-node2                    1/1       Running   2          15m

NAMESPACE     NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes   10.96.0.1    <none>        443/TCP         1h
kube-system   svc/kube-dns     10.96.0.10   <none>        53/UDP,53/TCP   1h

NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/calico-policy-controller   1         1         1            1           33m
kube-system   deploy/kube-dns                   1         1         1            1           1h

NAMESPACE     NAME                                     DESIRED   CURRENT   READY     AGE
kube-system   rs/calico-policy-controller-1906845835   1         1         1         33m
kube-system   rs/kube-dns-2425271678                   1         1         1         1h

Update

It seems the master node cannot recognize the Calico network plugin. I used kubeadm to install the cluster. Because kubeadm starts etcd listening only on 127.0.0.1:2379 on the master node, Calico on the other nodes cannot talk to etcd, so I modified etcd.yaml as shown below, and the Calico pods now run fine. I am not familiar with Calico, so how do I fix this properly? (A guess at the corresponding Calico-side change is sketched after the node description below.)

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --listen-client-urls=http://127.0.0.1:2379,http://10.161.233.80:2379
    - --advertise-client-urls=http://10.161.233.80:2379
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/lib/etcd
      name: etcd
    - mountPath: /etc/kubernetes
      name: k8s
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /var/lib/etcd
    name: etcd
  - hostPath:
      path: /etc/kubernetes
    name: k8s
status: {}

[root@k8s-node2 calico]# kubectl describe node k8s-node2
Name:                   k8s-node2
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=k8s-node2
                        node-role.kubernetes.io/master=
Annotations:            node.alpha.kubernetes.io/ttl=0
                        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:                 node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:      Tue, 12 Sep 2017 15:20:57 +0800
Conditions:
  Type                  Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk             False   Wed, 13 Sep 2017 10:25:58 +0800         Tue, 12 Sep 2017 15:20:57 +0800         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        False   Wed, 13 Sep 2017 10:25:58 +0800         Tue, 12 Sep 2017 15:20:57 +0800         KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure          False   Wed, 13 Sep 2017 10:25:58 +0800         Tue, 12 Sep 2017 15:20:57 +0800         KubeletHasNoDiskPressure        kubelet has no disk pressure
  Ready                 False   Wed, 13 Sep 2017 10:25:58 +0800         Tue, 12 Sep 2017 15:20:57 +0800         KubeletNotReady                 runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:   10.161.233.80
  Hostname:     k8s-node2
Capacity:
 cpu:           2
 memory:        3618520Ki
 pods:          110
Allocatable:
 cpu:           2
 memory:        3516120Ki
 pods:          110
System Info:
 Machine ID:                    3c6ff97c6fbe4598b53fd04e08937468
 System UUID:                   c6238bf8-8e60-4331-aeea-6d0ba9106344
 Boot ID:                       84397607-908f-4ff8-8bdc-ff86c364dd32
 Kernel Version:                3.10.0-514.6.2.el7.x86_64
 OS Image:                      CentOS Linux 7 (Core)
 Operating System:              linux
 Architecture:                  amd64
 Container Runtime Version:     docker://1.12.6
 Kubelet Version:               v1.7.5
 Kube-Proxy Version:            v1.7.5
PodCIDR:                        10.68.0.0/24
ExternalID:                     k8s-node2
Non-terminated Pods:            (5 in total)
  Namespace                     Name                                            CPU Requests    CPU Limits      Memory Requests Memory Limits
  ---------                     ----                                            ------------    ----------      --------------- -------------
  kube-system                   etcd-k8s-node2                                  0 (0%)          0 (0%)          0 (0%)          0 (0%)
  kube-system                   kube-apiserver-k8s-node2                        250m (12%)      0 (0%)          0 (0%)          0 (0%)
  kube-system                   kube-controller-manager-k8s-node2               200m (10%)      0 (0%)          0 (0%)          0 (0%)
  kube-system                   kube-proxy-qlmbx                                0 (0%)          0 (0%)          0 (0%)          0 (0%)
  kube-system                   kube-scheduler-k8s-node2                        100m (5%)       0 (0%)          0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  550m (27%)    0 (0%)          0 (0%)          0 (0%)
Events:         <none>
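If I understand correctly, the self-hosted Calico manifest reads the etcd address from the calico-config ConfigMap, so after exposing etcd on the node IP the matching change would be something like the fragment below. This is only a sketch assuming the standard calico.yaml layout for the etcd datastore; the ConfigMap name and the etcd_endpoints key come from that manifest, and the IP is my master's address:

# calico-config ConfigMap (fragment): point Calico at the etcd now listening on the node IP
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # was a loopback/unreachable address; the master IP must be reachable from every node
  etcd_endpoints: "http://10.161.233.80:2379"

After editing, the manifest would need to be re-applied (kubectl apply -f calico.yaml) and the calico-node pods restarted so the DaemonSet picks up the new endpoint.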

It's good practice to run the describe command to see what's wrong with the node:

kubectl describe nodes <node_name> 

e.g. kubectl describe nodes k8s-node2. You should be able to start your investigation from there, and add more info to the question if needed.
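For a NotReady condition like the one above (cni config uninitialized), a minimal follow-up sketch, assuming kubeadm/CNI default paths, is to check on the affected node whether a CNI config was actually written and whether a calico-node pod is scheduled there:

# on the NotReady node (paths assume kubeadm/CNI defaults)
ls /etc/cni/net.d/                        # a working Calico install writes its CNI config here
journalctl -u kubelet --no-pager | tail   # look for "network plugin is not ready" / CNI errors

# from any machine with kubectl access
kubectl -n kube-system get pods -o wide | grep calico-node    # is calico-node running on k8s-node2?
kubectl -n kube-system logs calico-node-11kvm -c calico-node  # pod name taken from the question's output

If no calico-node pod is scheduled on the master, one common cause is the node-role.kubernetes.io/master:NoSchedule taint shown in the describe output; the calico-node DaemonSet needs a matching toleration to run there.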

