During initialization, kubeadm gets stuck for a long time and eventually times out. I have tried both k8s.gcr.io and a custom image registry; both are reachable from the host and the images pull down successfully. Has anyone run into this problem?
[root@ip-10-0-0-110 ~]# docker images | grep aliyun
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64 v1.11.2 46a3cd725628 5 days ago 97.8 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64 v1.11.2 38521457c799 5 days ago 155 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64 v1.11.2 821507941e9c 5 days ago 187 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64 v1.11.2 37a1403e6c1a 5 days ago 56.8 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.1.3 b3b94275d97c 2 months ago 45.6 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64 3.2.18 b8df3b177be2 4 months ago 219 MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 7 months ago 742 kB
[root@ip-10-0-0-110 ~]# docker images | grep k8s.gcr.io
k8s.gcr.io/kube-proxy-amd64 v1.11.2 46a3cd725628 5 days ago 97.8 MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.2 38521457c799 5 days ago 155 MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.2 821507941e9c 5 days ago 187 MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.2 37a1403e6c1a 5 days ago 56.8 MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.0 55b70b420785 6 weeks ago 155 MB
k8s.gcr.io/kube-proxy-amd64 v1.11.0 1d3d7afd77d1 6 weeks ago 97.8 MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.0 214c48e87f58 6 weeks ago 187 MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.0 0e4a34a3b0e6 6 weeks ago 56.8 MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 2 months ago 45.6 MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 4 months ago 219 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 7 months ago 742 kB
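Since the same image IDs already exist locally under both registry names, one common workaround (a sketch, not necessarily the fix here) is to retag the Aliyun mirror images to the `k8s.gcr.io` names kubeadm expects, so no network pull is needed at all:

```shell
# Retag the already-pulled Aliyun mirror images to the k8s.gcr.io names.
# "|| true" keeps the loop going on hosts where an image is missing.
ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-proxy-amd64:v1.11.2 kube-apiserver-amd64:v1.11.2 \
           kube-controller-manager-amd64:v1.11.2 kube-scheduler-amd64:v1.11.2 \
           coredns:1.1.3 etcd-amd64:3.2.18 pause:3.1; do
  docker tag "$ALIYUN/$img" "k8s.gcr.io/$img" || true
done
```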
# kubeadm-config configuration
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn"
api:
controlPlaneEndpoint: "k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn:6443"
etcd:
local:
extraArgs:
listen-client-urls: "https://127.0.0.1:2379,https://10.0.0.110:2379"
advertise-client-urls: "https://10.0.0.110:2379"
listen-peer-urls: "https://10.0.0.110:2380"
initial-advertise-peer-urls: "https://10.0.0.110:2380"
initial-cluster: "ip-10-0-0-110=https://10.0.0.110:2380"
serverCertSANs:
- ip-10-0-1-110
- 10.0.0.110
peerCertSANs:
- ip-10-0-0-110
- 10.0.0.110
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "10.244.0.0/16"
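With this config saved as `kubeadm-config_1.yaml`, kubeadm v1.11 can fetch all control-plane images before running init (the command the error output below also suggests), which rules out slow or failing pulls as the cause of the timeout:

```shell
# Pre-pull the control-plane images referenced by the config so that
# "kubeadm init" does not time out waiting on downloads. "|| true" lets
# this snippet run on hosts without kubeadm installed.
CONFIG=kubeadm-config_1.yaml
kubeadm config images pull --config "$CONFIG" || true
```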
# Initialization error
[root@ip-10-0-0-110 cnk8s]# kubeadm init --config kubeadm-config_1.yaml
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0813 14:11:25.834061 6905 kernel_validator.go:81] Validating kernel version
I0813 14:11:25.834168 6905 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using "kubeadm config images pull"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-10-0-0-110 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn k8s-master-a25ae1a9a50c23a2.elb.cn-northwest-1.amazonaws.com.cn] and IPs [10.96.0.1 10.0.0.110]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [ip-10-0-0-110 localhost ip-10-0-1-110] and IPs [127.0.0.1 ::1 10.0.0.110]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-10-0-0-110 localhost ip-10-0-0-110] and IPs [10.0.0.110 127.0.0.1 ::1 10.0.0.110]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.2
- registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.2
- registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.2
- registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- "systemctl status kubelet"
- "journalctl -xeu kubelet"
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- "docker ps -a | grep kube | grep -v pause"
Once you have found the failing container, you can inspect its logs with:
- "docker logs CONTAINERID"
couldn't initialize a Kubernetes cluster
# Log details: /var/log/messages
Aug 13 14:24:06 ip-10-0-0-110 kubelet: I0813 14:24:06.906118 7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:16 ip-10-0-0-110 kubelet: I0813 14:24:16.920127 7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:23 ip-10-0-0-110 kubelet: I0813 14:24:23.748598 7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 13 14:24:26 ip-10-0-0-110 kubelet: I0813 14:24:26.931509 7225 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
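The kubelet log above shows no obvious error, so the next step is to inspect the kubelet and the control-plane containers directly. A troubleshooting sketch (each command degrades gracefully with `|| true` so the whole block can be pasted at once):

```shell
# Is the kubelet running, and what did it log last?
systemctl status kubelet --no-pager || true
journalctl -xeu kubelet --no-pager | tail -n 50 || true
# Did a control-plane container start and then crash?
docker ps -a | grep kube | grep -v pause || true
# A frequent cause of exactly this hang: docker and the kubelet
# disagreeing on the cgroup driver (cgroupfs vs systemd).
docker info 2>/dev/null | grep -i cgroup || true
```

If `docker ps -a` shows an exited kube-apiserver or etcd container, `docker logs <container-id>` on it usually names the real failure.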