
Resource request and limit #640

Open · wants to merge 4 commits into main
Conversation

james-otten (Collaborator) commented:

Set resource requests + limits based on current production usage as seen in el doggo.

Docs: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

codecov bot commented Oct 11, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 94.15%. Comparing base (59cc0de) to head (5ae1edc).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #640   +/-   ##
=======================================
  Coverage   94.15%   94.15%           
=======================================
  Files          80       80           
  Lines        3252     3252           
=======================================
  Hits         3062     3062           
  Misses        190      190           


@WillNilges (Collaborator) left a comment:


Unless you have a very good reason, such as an app is startup-hungry, we should be pinning requests and limits. It makes things more predictable, simpler to debug, and there's no reason not to if we know we have the resources to spare.

Each of these pins should be done one at a time.

(Addendum, this is all based on my own experience. If you have a Datadog graph that proves me wrong, I'd like to see it.)

# requests:
#   cpu: 100m
#   memory: 128Mi
resources:
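The pinning the review argues for means setting requests equal to limits, which puts the pod in the Guaranteed QoS class. A minimal sketch, with illustrative values that are not from this PR:

```yaml
# Hypothetical example: requests == limits gives the pod Guaranteed QoS,
# so the scheduler reserves exactly what the container is allowed to use.
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
```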

We should definitely pin Postgres to a core. It deserves it.
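Taken literally, pinning Postgres to a core would set CPU request and limit both to 1. A sketch; the memory values are assumptions, since the comment only covers CPU:

```yaml
# Hypothetical Postgres container spec: CPU request == limit == 1 core.
resources:
  requests:
    cpu: 1
    memory: 512Mi   # illustrative; not specified in the review
  limits:
    cpu: 1
    memory: 512Mi
```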

Comment on lines 46 to 52
resources:
  limits:
    cpu: 1
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 350Mi

Pin this guy to 512Mi. 200m is probably fine as a request, as I think most of the CPU is on startup with the tracer.
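Applying that suggestion to the snippet above — memory pinned at 512Mi, CPU request left at 200m — would give roughly:

```yaml
resources:
  limits:
    cpu: 1          # unchanged from the diff
    memory: 512Mi
  requests:
    cpu: 200m       # most of the CPU is startup cost with the tracer
    memory: 512Mi   # pinned to the limit, per the comment
```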

Comment on lines 61 to 67
resources:
  limits:
    cpu: 1
    memory: 100Mi
  requests:
    cpu: 250m
    memory: 30Mi

This guy definitely doesn't need more than 250mcore and like, 128Mi of RAM. If our web proxy is taking up more than that, we've either got wayyyy more users than we thought we did, or we have a performance problem.
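The ceiling proposed for the web proxy, pinned per the general advice above, would look something like this (a sketch of the suggested values, not the committed diff):

```yaml
resources:
  limits:
    cpu: 250m
    memory: 128Mi
  requests:
    cpu: 250m      # pinned equal to the limit
    memory: 128Mi
```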

Comment on lines 94 to 100
resources:
  limits:
    cpu: 1
    memory: 512Mi
  requests:
    cpu: 50m
    memory: 30Mi

Redis really doesn't like not being pinned, especially as an in-memory cache. I'd say pin it to 256Mi and like..... 250mcore. If that doesn't work, bump it.
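The Redis suggestion as a sketch — 256Mi and 250m, requests pinned to limits. One caveat worth noting for an in-memory cache: Redis's own `maxmemory` should sit below the container limit so Redis evicts keys before the kernel OOM killer fires.

```yaml
resources:
  limits:
    cpu: 250m
    memory: 256Mi
  requests:
    cpu: 250m
    memory: 256Mi
# Also set maxmemory (e.g. ~200mb) in the Redis config, below the
# container limit, so eviction happens inside Redis rather than via OOM kill.
```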

Comment on lines 109 to 115
resources:
  limits:
    cpu: 1
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 250Mi

No idea here, honestly. We can probably get away with pinning the limit down to the smaller request here, though.
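Pinning the limit down to the request, as floated above, would mean:

```yaml
# Sketch: limits lowered to match the existing requests from the diff.
resources:
  limits:
    cpu: 250m
    memory: 250Mi
  requests:
    cpu: 250m
    memory: 250Mi
```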

3 participants