kube-system: Pod Warning FailedScheduling default-scheduler no nodes available to schedule pods

Why am I getting:



kube-system 1m 1h 245 kube-dns-fcd468cb-8fhg2.156899dbda62d287 Pod Warning FailedScheduling default-scheduler no nodes available to schedule pods
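
For reference, I'm seeing this in the events list, via something like:

kubectl get events --all-namespaces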



UPDATE: I've now migrated the entire cluster to us-west-2 (rather than eu-west-1) so I can run the example code out of the box and avoid introducing any errors of my own. The tfstate file shows the correct EKS AMI being referenced, e.g.:



720: "image_id": "ami-00c3b2d35bddd4f5c",



FWIW, I'm following along with https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html and using the code it links to on GitHub, i.e. https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started



Note: looking at EC2 Instances in the console, I can see 2 EKS nodes running with the correct AMI IDs.



==== UPDATES



Checking nodes:



kubectl get nodes
No resources found.


SSHing into one of the nodes and running journalctl shows:



Nov 21 12:28:25 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:25.419465    4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unauthorized
Nov 21 12:28:25 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:25.735882 4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unauthorized
Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:26.237953 4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: W1121 12:28:26.418327 4417 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:26.418477 4417 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: n
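
For reference, I grabbed these with something like the following (assuming the kubelet runs as a systemd unit named kubelet on this AMI):

journalctl -u kubelet --no-pager | tail -n 50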


Given that auth may be the issue, I checked the Terraform code, which seems to be correct, e.g.:



https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/eks-getting-started/outputs.tf#L9-L20
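
As I understand the guide, that output renders the aws-auth ConfigMap that worker nodes need in order to register with the cluster - something along these lines (the role ARN below is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # maps the worker nodes' IAM role so their kubelets can join as system:nodes
  mapRoles: |
    - rolearn: <worker-node-iam-role-arn>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

which, assuming the output is named config_map_aws_auth, would be applied with:

terraform output config_map_aws_auth > aws-auth.yaml
kubectl apply -f aws-auth.yaml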



Is there any way I can test this in more detail? Any further suggestions?

kubernetes amazon-eks

edited Nov 21 at 14:54
asked Nov 19 at 19:55 by Snowcrash


1 Answer

I'm guessing you don't have any nodes registered on your cluster. Just because the EC2 instances are up, it doesn't mean that your cluster is able to use them. You can check with:



          $ kubectl get nodes


          Another possibility is that your nodes are available but they don't have enough resources (which is unlikely).
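
If you want to double-check that, something like this shows what each node actually has free (the grep is just a quick filter):

$ kubectl describe nodes | grep -A 5 Allocatable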



          Another possibility is that your nodes are tainted with something like this:



          $ kubectl taint node node1 key=value:NoSchedule


You can check for the taint and remove it:



          $ kubectl describe node node1
          $ kubectl taint node node1 key:NoSchedule-


Another possibility is that you have a nodeSelector in your pod spec but no nodes labeled to match it. Check with:



          $ kubectl get nodes --show-labels
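
For reference, this is the kind of pod spec field I mean - the disktype=ssd label is just an example:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd   # only schedules onto nodes labeled disktype=ssd
  containers:
  - name: app
    image: nginx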





answered Nov 19 at 22:17 by Rico

• Yes, when I do a get nodes I get No resources found, so it must be that the cluster can't use or access them.
            – Snowcrash
            Nov 20 at 13:03