Kubernetes Autoscaling with Memory Not Working, But working for CPU


























The YAML I used is shown below:



    apiVersion: v1
    kind: Service
    metadata:
      name: xxx-svc
      labels:
        app: xxxxxx
    spec:
      type: NodePort
      ports:
      - port: 8080
      selector:
        app: xxxxxx
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-xxx
      labels:
        app: xxxxxx
    spec:
      selector:
        matchLabels:
          app: xxxxxx
      template:
        metadata:
          labels:
            app: xxxxxx
        spec:
          containers:
          - name: xxxxxx
            image: yyy/xxxxxx:latest
            ports:
            - containerPort: 8080
            resources:
              requests:
                cpu: "100m"
                memory: "504Mi"
              limits:
                cpu: "100m"
                memory: "504Mi"
    ---
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: xxxxxx
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-xxx
      minReplicas: 1
      maxReplicas: 3
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
      - type: Resource
        resource:
          name: memory
          target:
            type: Value
            averageValue: 500Mi


The Service, HPA, and Deployment all deployed successfully, but when I check the HPA (kubectl get hpa) I get the result below:



    NAME     REFERENCE           TARGETS                 MINPODS   MAXPODS   REPLICAS   AGE
    xxxxxx   Deployment/my-xxx   unknown/500Mi, 1%/50%   1         3         3          69m


The reason I got (from kubectl describe hpa) is:




Warning FailedComputeMetricsReplicas 21m (x4 over 22m) horizontal-pod-autoscaler failed to get memory utilization: missing request for memory




What might be the reason that memory is unknown while CPU is working?





















































      Tags: docker kubernetes






      asked Nov 21 at 8:23 by JibinNajeeb, edited Nov 21 at 8:59 by Prafull Ladha
























          1 Answer














          The reason for this:




          Warning FailedComputeMetricsReplicas 21m (x4 over 22m)
          horizontal-pod-autoscaler failed to get memory utilization: missing
          request for memory




          Kubernetes HPA does not work with memory out of the box; you need to create a custom metric for memory and then use it.
          I found some additional information here on how people try to solve the same issue.




          Pod Memory Based AutoScaling



          In this section, we discuss how you can deploy autoscaling based on
          the memory that the pods are consuming. We have used the command
          "kubectl top pod" to get the utilized pod memory and applied the
          logic.




          • Get the average pod memory of the running pods: Execute the script as follows:
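          The script itself is not reproduced in the answer, so the following is only a minimal, hypothetical sketch of that approach. It assumes kubectl top pod reports memory in Mi, and the selector, deployment name, threshold, and replica cap are placeholder values:

              #!/usr/bin/env bash
              # Hypothetical sketch (not the script referenced above): scale a deployment
              # up by one replica when the average memory usage of its pods crosses a threshold.
              # SELECTOR, DEPLOYMENT, THRESHOLD_MI and MAX_REPLICAS are placeholders.

              SELECTOR="app=xxxxxx"
              DEPLOYMENT="my-xxx"
              THRESHOLD_MI=400
              MAX_REPLICAS=3

              # "kubectl top pod" prints: NAME  CPU(cores)  MEMORY(bytes)
              # Skip the header row and strip the "Mi" suffix from the memory column
              # (assumes the metrics are reported in Mi).
              mem_values=$(kubectl top pod -l "$SELECTOR" | tail -n +2 | awk '{gsub("Mi","",$3); print $3}')

              count=0
              sum=0
              for m in $mem_values; do
                sum=$((sum + m))
                count=$((count + 1))
              done

              if [ "$count" -eq 0 ]; then
                echo "no pods found for selector $SELECTOR"
                exit 1
              fi

              avg=$((sum / count))
              echo "average pod memory: ${avg}Mi across ${count} pod(s)"

              # Scale up while the average stays above the threshold and we are below the cap.
              if [ "$avg" -gt "$THRESHOLD_MI" ] && [ "$count" -lt "$MAX_REPLICAS" ]; then
                kubectl scale deployment "$DEPLOYMENT" --replicas=$((count + 1))
              fi

          A script like this would typically be run on a schedule (for example from a CronJob) so the average is re-evaluated periodically.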







          answered Nov 21 at 14:37 by Nick Rak












          • I think the script is too old; some errors are occurring. Also, in the Kubernetes documentation they mention that memory and CPU are supported by default (kubernetes.io/docs/tasks/run-application/…)
            – JibinNajeeb
            Nov 23 at 5:18

















