K8s initialization problem: "timed out waiting for the condition" — has anyone run into this and solved it?

[init] this might take a minute or longer if the control plane images have to be pulled

During initialization it hangs at this step for a long time. Whether I use k8s.gcr.io or a custom image repository address, the registry is reachable and the images can be pulled down manually.
Has anyone run into this problem?
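
To confirm the registry really is reachable from the node, a manual pull of one of the images named in the config further down (same repository and tag) succeeds, e.g.:

[root@ip-10-0-0-110 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.2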

Docker images
[root@ip-10-0-0-110 ~]# docker images | grep aliyun
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64                v1.11.2             46a3cd725628        5 days ago          97.8 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64   v1.11.2             38521457c799        5 days ago          155 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64            v1.11.2             821507941e9c        5 days ago          187 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64            v1.11.2             37a1403e6c1a        5 days ago          56.8 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64                      3.2.18              b8df3b177be2        4 months ago        219 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                           3.1                 da86e6ba6ca1        7 months ago        742 kB
[root@ip-10-0-0-110 ~]# docker images | grep k8s.gcr.io
k8s.gcr.io/kube-proxy-amd64                                                         v1.11.2             46a3cd725628        5 days ago          97.8 MB
k8s.gcr.io/kube-controller-manager-amd64                                            v1.11.2             38521457c799        5 days ago          155 MB
k8s.gcr.io/kube-apiserver-amd64                                                     v1.11.2             821507941e9c        5 days ago          187 MB
k8s.gcr.io/kube-scheduler-amd64                                                     v1.11.2             37a1403e6c1a        5 days ago          56.8 MB
k8s.gcr.io/kube-controller-manager-amd64                                            v1.11.0             55b70b420785        6 weeks ago         155 MB
k8s.gcr.io/kube-proxy-amd64                                                         v1.11.0             1d3d7afd77d1        6 weeks ago         97.8 MB
k8s.gcr.io/kube-apiserver-amd64                                                     v1.11.0             214c48e87f58        6 weeks ago         187 MB
k8s.gcr.io/kube-scheduler-amd64                                                     v1.11.0             0e4a34a3b0e6        6 weeks ago         56.8 MB
k8s.gcr.io/coredns                                                                  1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                                                               3.2.18              b8df3b177be2        4 months ago        219 MB
k8s.gcr.io/pause                                                                    3.1                 da86e6ba6ca1        7 months ago        742 kB
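
Note that kubeadm derives the image names it needs from its configuration, so it is worth comparing the listing above against what kubeadm itself expects. kubeadm v1.11 added a subcommand for this; passing the same config file should make it use the imageRepository below (treat the exact flag support in this version as an assumption):

[root@ip-10-0-0-110 ~]# kubeadm config images list --config kubeadm-config_1.yaml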
kubeadm-config configuration
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn"
api:
  controlPlaneEndpoint: "k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn:6443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.0.0.110:2379"
      advertise-client-urls: "https://10.0.0.110:2379"
      listen-peer-urls: "https://10.0.0.110:2380"
      initial-advertise-peer-urls: "https://10.0.0.110:2380"
      initial-cluster: "ip-10-0-0-110=https://10.0.0.110:2380"
    serverCertSANs:
      - ip-10-0-1-110
      - 10.0.0.110
    peerCertSANs:
      - ip-10-0-0-110
      - 10.0.0.110
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "10.244.0.0/16"
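
Since the error output below explicitly suggests it, the control plane images can also be pre-pulled and cached with the same config file before running init:

[root@ip-10-0-0-110 cnk8s]# kubeadm config images pull --config kubeadm-config_1.yaml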
Initialization error
[root@ip-10-0-0-110 cnk8s]# kubeadm init --config kubeadm-config_1.yaml
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0813 14:11:25.834061    6905 kernel_validator.go:81] Validating kernel version
I0813 14:11:25.834168    6905 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using "kubeadm config images pull"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-10-0-0-110 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn] and IPs [10.96.0.1 10.0.0.110]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [ip-10-0-0-110 localhost ip-10-0-1-110] and IPs [127.0.0.1 ::1 10.0.0.110]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-10-0-0-110 localhost ip-10-0-0-110] and IPs [10.0.0.110 127.0.0.1 ::1 10.0.0.110]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

        Unfortunately, an error has occurred:
            timed out waiting for the condition

        This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
            - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                - registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.2
                - registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.2
                - registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.2
                - registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                  are downloaded locally and cached.

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - "systemctl status kubelet"
            - "journalctl -xeu kubelet"

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
        Here is one example how you may list all Kubernetes containers running in docker:
            - "docker ps -a | grep kube | grep -v pause"
            Once you have found the failing container, you can inspect its logs with:
            - "docker logs CONTAINERID"
couldn"t initialize a Kubernetes cluster
Log details: /var/log/messages
Aug 13 14:24:06 ip-10-0-0-110 kubelet: I0813 14:24:06.906118    7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:16 ip-10-0-0-110 kubelet: I0813 14:24:16.920127    7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:23 ip-10-0-0-110 kubelet: I0813 14:24:23.748598    7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:26 ip-10-0-0-110 kubelet: I0813 14:24:26.931509    7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
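
The kubelet log above only shows the node-status annotation repeating every ten seconds, which suggests the kubelet itself is running but the static pod containers are failing or never starting, rather than the images being missing. A minimal troubleshooting sequence, assuming docker is the container runtime (a cgroup driver mismatch between docker and the kubelet is a common cause of exactly this timeout, so it is worth ruling out first):

[root@ip-10-0-0-110 ~]# docker info 2>/dev/null | grep -i cgroup       # cgroup driver docker is using
[root@ip-10-0-0-110 ~]# cat /var/lib/kubelet/kubeadm-flags.env         # flags kubeadm passed to the kubelet
[root@ip-10-0-0-110 ~]# docker ps -a | grep kube | grep -v pause       # were any control plane containers created?
[root@ip-10-0-0-110 ~]# docker logs CONTAINERID                        # inspect a failing container found above

If the two cgroup drivers disagree (for example docker on cgroupfs and the kubelet on systemd), align them, run kubeadm reset, and retry the init.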

Was this ever solved, brother?
