E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 5005 (unattended-upgr)
N: Be aware that removing the lock file is not a solution and may break your system.
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?
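As the message warns, deleting the lock file is not the fix. A safer approach is to confirm what holds the lock and let it finish, or stop the background updater temporarily; a minimal sketch, assuming an Ubuntu/Debian host where unattended-upgrades runs under systemd:

```bash
# Confirm what holds the dpkg lock (PID 5005 per the error above)
ps -p 5005 -o pid,comm,etime

# Either wait for it to exit, or stop the background updater for now
sudo systemctl stop unattended-upgrades
```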
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
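For a regular (non-root) user, the same kubeadm init output prescribes copying the admin kubeconfig instead:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```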
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
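The join command printed here carries a cluster-specific token and CA hash. Its generic shape is sketched below (placeholders, not this cluster's real values; the API endpoint 192.168.11.161:6443 appears in the kubelet logs later in this section), and a fresh command can always be regenerated on the control plane:

```bash
# Generic form; <token> and <hash> are placeholders
kubeadm join 192.168.11.161:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Regenerate a complete join command if the original is lost
kubeadm token create --print-join-command
```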
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   103m    v1.32.3
k8s-worker01   NotReady   <none>          4m12s   v1.32.3
k8s-worker02   NotReady   <none>          4m12s   v1.32.3
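NotReady is expected at this point: no CNI plugin is installed yet. The node's conditions spell out the reason; a quick check (grep pattern is illustrative, the message is typically "cni plugin not initialized"):

```bash
kubectl describe node k8s-master01 | grep -i cni
```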
Check the cluster Pod status on the master node (filtered by namespace)
root@k8s-master01:~# kubectl get pods -n kube-system
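The CRD-creation output below comes from applying the Tigera operator manifest with kubectl create -f. A typical invocation has this form; the Calico release tag is a placeholder, as it is not recorded in the source:

```bash
# Hypothetical form; substitute a real Calico release tag for <version>
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/<version>/manifests/tigera-operator.yaml
```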
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
root@k8s-master01:~# kubectl get ns
NAME              STATUS   AGE
default           Active   116m
kube-node-lease   Active   116m
kube-public       Active   116m
kube-system       Active   116m
tigera-operator   Active   106s   // the newly created namespace
root@k8s-master01:~# kubectl get pods -n tigera-operator
NAME                              READY   STATUS             RESTARTS   AGE
tigera-operator-ccfc44587-vswlt   0/1     ImagePullBackOff   0          3m44s
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16  # change this line to the pod network CIDR defined at cluster init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
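With the CIDR edited to match the cluster, the custom resources are applied and the Calico pods watched until they come up; a typical sequence, assuming the manifest above was saved as custom-resources.yaml:

```bash
kubectl create -f custom-resources.yaml

# Calico components land in the calico-system namespace
kubectl get pods -n calico-system -w
```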
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   3h40m   v1.32.3
k8s-worker01   Ready    <none>          121m    v1.32.3
k8s-worker02   Ready    <none>          121m    v1.32.3
Check the Pod status
root@k8s-master01:~# kubectl get pods -n kube-system
root@k8s-master01:~# kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        4h11m
nginxweb-service   NodePort    10.106.65.218   <none>        80:30080/TCP   10m
root@k8s-master01:~# curl http://10.106.65.218   // verify access from inside the K8s cluster
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
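The same Service is also reachable from outside the cluster through its NodePort. A quick check, assuming 192.168.11.161 is one of the node IPs (it appears as the master's address elsewhere in this setup) and 30080 is the NodePort from the service listing above:

```bash
curl http://192.168.11.161:30080
```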
- helm search: search for charts
- helm pull: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
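A typical round trip through those four commands, using the public Bitnami repo as an illustrative source (not one used in this document):

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo nginx               # helm search: find a chart
helm pull bitnami/nginx --untar      # helm pull: inspect the chart locally
helm install my-nginx bitnami/nginx  # helm install: deploy it as a release
helm list                            # helm list: show deployed releases
```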
Environment variables:
| Name | Description |
|------|-------------|
| $HELM_CACHE_HOME | set an alternative location for storing cached files. |
| $HELM_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $HELM_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DEBUG | indicate whether or not Helm is running in Debug mode |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory, sql. |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use. |
| $HELM_MAX_HISTORY | set the maximum number of helm release history. |
| $HELM_NAMESPACE | set the namespace used for the helm operations. |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $HELM_PLUGINS | set the path to the plugins directory |
| $HELM_REGISTRY_CONFIG | set the path to the registry config file. |
| $HELM_REPOSITORY_CACHE | set the path to the repository cache directory |
| $HELM_REPOSITORY_CONFIG | set the path to the repositories file. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
| $HELM_KUBEAPISERVER | set the Kubernetes API Server Endpoint for authentication |
| $HELM_KUBECAFILE | set the Kubernetes certificate authority file. |
| $HELM_KUBEASGROUPS | set the Groups to use for impersonation using a comma-separated list. |
| $HELM_KUBEASUSER | set the Username to impersonate for the operation. |
| $HELM_KUBECONTEXT | set the name of the kubeconfig context. |
| $HELM_KUBETOKEN | set the Bearer KubeToken used for authentication. |
| $HELM_KUBEINSECURE_SKIP_TLS_VERIFY | indicate if the Kubernetes API server's certificate validation should be skipped (insecure) |
| $HELM_KUBETLS_SERVER_NAME | set the server name used to validate the Kubernetes API server certificate |
| $HELM_BURST_LIMIT | set the default burst limit in the case the server contains many CRDs (default 100, -1 to disable) |
| $HELM_QPS | set the Queries Per Second in cases where a high number of calls exceed the option for higher burst values |
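These variables override the corresponding flag defaults per invocation; for example, a one-off listing scoped to a different namespace and kubeconfig (illustrative values):

```bash
HELM_NAMESPACE=kube-system KUBECONFIG=/etc/kubernetes/admin.conf helm list
```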
Helm stores cache, configuration, and data based on the following configuration order:
- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system
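To see which paths this resolution produced on a given system, `helm env` (from the command list below) prints the effective values:

```bash
helm env | grep -E 'HELM_(CACHE|CONFIG|DATA)_HOME'
```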
By default, the default directories depend on the Operating System. The defaults are listed below:

| Operating System | Cache Path | Configuration Path | Data Path |
|------------------|------------|--------------------|-----------|
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
Available Commands:
  completion  generate autocompletion scripts for the specified shell
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examine a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download a chart from a repository and (optionally) unpack it in local directory
  push        push a chart to remote registry
  registry    login to or logout from a registry
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      display the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information
Flags:
      --burst-limit int                 client-side default throttling limit (default 100)
      --debug                           enable verbose output
  -h, --help                            help for helm
      --kube-apiserver string           the address and the port for the Kubernetes API server
      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --kube-as-user string             username to impersonate for the operation
      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection
      --kube-context string             name of the kubeconfig context to use
      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
      --kube-token string               bearer token used for authentication
      --kubeconfig string               path to the kubeconfig file
  -n, --namespace string                namespace scope for this request
      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting
      --registry-config string          path to the registry config file (default "/root/.config/helm/registry/config.json")
      --repository-cache string         path to the directory containing cached repository indexes (default "/root/.cache/helm/repository")
      --repository-config string        path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
Use "helm [command] --help" for more information about a command. root@k8s-master01:~#
Release "kubernetes-dashboard" does not exist. Installing it now. NAME: kubernetes-dashboard LAST DEPLOYED: Sun Mar 23 17:44:31 2025 NAMESPACE: kubernetes-dashboard STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: ************************************************************************************************* *** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready *** *************************************************************************************************
Congratulations! You have just installed Kubernetes Dashboard in your cluster.
To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
  kubectl -n kubernetes-dashboard get svc
Dashboard will be available at:
  https://localhost:8443
root@k8s-master01:~#
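Logging in requires a bearer token, and reaching the UI from a browser on another machine requires binding the forward to a non-loopback address. A minimal sketch, assuming a throwaway admin ServiceAccount named dashboard-admin (not part of the chart's defaults; cluster-admin is fine for a lab, too broad for production):

```bash
# Bind the port-forward to all interfaces so another host can reach it
kubectl -n kubernetes-dashboard port-forward --address 0.0.0.0 \
  svc/kubernetes-dashboard-kong-proxy 8443:443

# Create an admin ServiceAccount and mint a login token
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin
```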
Mar 22 13:26:21 k8s-master01 kubelet[9760]: E0322 13:26:21.545465 9760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-k8s-master01_kube-system(19dfb154dd163834eef61b7252703fed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-k8s-master01_kube-system(19dfb154dd163834eef61b7252703fed)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"623e76469b34076613d313938932728b1c6e63d0951c4c1a1a7563332c35b2e2\\\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/623e76469b34076613d313938932728b1c6e63d0951c4c1a1a7563332c35b2e2/log.json: no such file or directory): exec: \\\"runc\\\": executable file not found in $PATH\"" pod="kube-system/kube-controller-manager-k8s-master01" podUID="19dfb154dd163834eef61b7252703fed"
Mar 22 13:26:23 k8s-master01 kubelet[9760]: E0322 13:26:23.120378 9760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.11.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master01?timeout=10s\": dial tcp 192.168.11.161:6443: connect: connection refused" interval="7s"
Mar 22 13:26:23 k8s-master01 kubelet[9760]: I0322 13:26:23.327532 9760 kubelet_node_status.go:75] "Attempting to register node" node="k8s-master01"
Mar 22 13:26:23 k8s-master01 kubelet[9760]: E0322 13:26:23.328029 9760 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.11.161:6443/api/v1/nodes\": dial tcp 192.168.11.161:6443: connect: connection refused" node="k8s-master01"
Mar 22 13:26:25 k8s-master01 kubelet[9760]: W0322 13:26:25.039914 9760 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.11.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.11.161:6443: connect: connection refused
Mar 22 13:26:25 k8s-master01 kubelet[9760]: E0322 13:26:25.040030 9760 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.11.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.11.161:6443: connect: connection refused" logger="UnhandledError"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: W0322 13:26:26.039053 9760 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.11.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.11.161:6443: connect: connection refused
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.039171 9760 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.11.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.11.161:6443: connect: connection refused" logger="UnhandledError"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.450839 9760 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"k8s-master01\" not found" node="k8s-master01"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.451717 9760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 119.29.29.29 114.114.114.114 8.8.8.8"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.547499 9760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox \"c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.547575 9760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox \"c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH" pod="kube-system/kube-apiserver-k8s-master01"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.547611 9760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox \"c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH" pod="kube-system/kube-apiserver-k8s-master01"
Mar 22 13:26:26 k8s-master01 kubelet[9760]: E0322 13:26:26.547680 9760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-k8s-master01_kube-system(843638c9eef6e3d63d22d53c9d990887)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-k8s-master01_kube-system(843638c9eef6e3d63d22d53c9d990887)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54\\\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c54802a9d762ba695f68d28021ea9339eb8e9845d41161d6fa1f9bfb4d811c54/log.json: no such file or directory): exec: \\\"runc\\\": executable file not found in $PATH\"" pod="kube-system/kube-apiserver-k8s-master01" podUID="843638c9eef6e3d63d22d53c9d990887"
Mar 22 13:26:29 k8s-master01 kubelet[9760]: E0322 13:26:29.498553 9760 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8s-master01\" not found"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.122192 9760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.11.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master01?timeout=10s\": dial tcp 192.168.11.161:6443: connect: connection refused" interval="7s"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: I0322 13:26:30.329597 9760 kubelet_node_status.go:75] "Attempting to register node" node="k8s-master01"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.330070 9760 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.11.161:6443/api/v1/nodes\": dial tcp 192.168.11.161:6443: connect: connection refused" node="k8s-master01"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.450085 9760 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"k8s-master01\" not found" node="k8s-master01"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.450964 9760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 119.29.29.29 114.114.114.114 8.8.8.8"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.548988 9760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox \"f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.549096 9760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox \"f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH" pod="kube-system/kube-scheduler-k8s-master01"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.549145 9760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox \"f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b/log.json: no such file or directory): exec: \"runc\": executable file not found in $PATH" pod="kube-system/kube-scheduler-k8s-master01"
Mar 22 13:26:30 k8s-master01 kubelet[9760]: E0322 13:26:30.549256 9760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-k8s-master01_kube-system(c7ec3024ac27f675ae60ec998a90ce85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-k8s-master01_kube-system(c7ec3024ac27f675ae60ec998a90ce85)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b\\\": failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f8b4188ebe49ae964cfbbb1837c3899194dede9e323889d10636e4002993581b/log.json: no such file or directory): exec: \\\"runc\\\": executable file not found in $PATH\"" pod="kube-system/kube-scheduler-k8s-master01" podUID="c7ec3024ac27f675ae60ec998a90ce85"
Mar 22 13:26:31 k8s-master01 kubelet[9760]: E0322 13:26:31.073491 9760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.11.161:6443/api/v1/namespaces/default/events\": dial tcp 192.168.11.161:6443: connect: connection refused" event="&Event{ObjectMeta:{k8s-master01.182f0852158da106 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:k8s-master01,UID:k8s-master01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node k8s-master01 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:k8s-master01,},FirstTimestamp:2025-03-22 13:18:49.45761511 +0800 CST m=+0.101712151,LastTimestamp:2025-03-22 13:18:49.45761511 +0800 CST m=+0.101712151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:k8s-master01,}"
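The recurring `exec: \"runc\": executable file not found in $PATH` is the actual fault in these logs: containerd cannot find the OCI runtime, so no sandbox (and hence no control-plane pod) can start, which in turn explains all the connection-refused errors against 192.168.11.161:6443. A minimal fix sketch, assuming Ubuntu/Debian with distro packages:

```bash
# Install the missing OCI runtime, then restart the runtime and kubelet
apt-get install -y runc
systemctl restart containerd
systemctl restart kubelet

# Verify runc is now resolvable
which runc && runc --version
```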
7.1.4 When deploying the Calico network plugin in the Kubernetes cluster, kubectl get pods -n tigera-operator keeps showing the Pod as ImagePullBackOff
~# kubectl describe pod tigera-operator-ccfc44587-vswlt -n tigera-operator
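The Events section of the describe output names the image that fails to pull. A useful follow-up is to try pulling it by hand on the node with crictl (quay.io/tigera/operator is the operator's usual image; the exact tag comes from the Events, so it is left as a placeholder):

```bash
# Substitute the image:tag reported in the pod's Events
crictl pull quay.io/tigera/operator:<tag>
```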