Resource request and limit #640

Open. Wants to merge 4 commits into main (changes shown from 1 commit).
infra/helm/meshdb/values.yaml (50 changes: 35 additions & 15 deletions)

@@ -19,17 +19,13 @@ pg:
     # readOnlyRootFilesystem: true
     # runAsNonRoot: true
     # runAsUser: 1000
-  resources: {}
-  # We usually recommend not to specify default resources and to leave this as a conscious
-  # choice for the user. This also increases chances charts run on environments with little
-  # resources, such as Minikube. If you do want to specify resources, uncomment the following
-  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
-  # limits:
-  #   cpu: 100m
-  #   memory: 128Mi
-  # requests:
-  #   cpu: 100m
-  #   memory: 128Mi
+  resources:
+    limits:
+      cpu: 1
+      memory: 512Mi
+    requests:
+      cpu: 200m
+      memory: 256Mi
   nodeSelector: {}
   affinity: {}
   tolerations: []

Review comment (Collaborator) on pg.resources: We should definitely pin Postgres to a core. It deserves it.
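A minimal sketch of what "pinning Postgres to a core" could mean here, assuming the intent is to give the pg container equal requests and limits, which places the pod in the Guaranteed QoS class (exclusive core pinning additionally requires the kubelet's static CPU manager policy and an integer CPU value). The numbers are illustrative, not part of the PR:

pg:
  resources:
    limits:
      cpu: 1            # whole core, integer value
      memory: 512Mi
    requests:
      cpu: 1            # request == limit for every resource gives Guaranteed QoS
      memory: 512Mi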
@@ -47,7 +43,13 @@ meshweb:
     tag: main
   podSecurityContext: {}
   securityContext: {}
-  resources: {}
+  resources:
+    limits:
+      cpu: 1
+      memory: 512Mi
+    requests:
+      cpu: 200m
+      memory: 350Mi
   nodeSelector: {}
   affinity: {}
   tolerations: []

Review comment (Collaborator) on meshweb.resources: Pin this guy to 512Mi. 200m is probably fine as a request, as I think most of the CPU is on startup with the tracer.
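If the comment above is read literally (pin memory at 512Mi, keep the 200m CPU request), the meshweb block might end up as below. The figures are the reviewer's suggestion plus my reading of it, not measured values:

meshweb:
  resources:
    limits:
      cpu: 1
      memory: 512Mi
    requests:
      cpu: 200m         # most CPU reportedly goes to the startup tracer, so a low steady-state request
      memory: 512Mi     # memory request == limit: the pod never exceeds its request, so it is a poor eviction candidate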
@@ -56,7 +58,13 @@ nginx:
   port: 80
   podSecurityContext: {}
   securityContext: {}
-  resources: {}
+  resources:
+    limits:
+      cpu: 1
+      memory: 100Mi
+    requests:
+      cpu: 250m
+      memory: 30Mi
   nodeSelector: {}
   affinity: {}
   tolerations: []

Review comment (Collaborator) on nginx.resources: This guy definitely doesn't need more than 250mcore and like, 128Mi of RAM. If our web proxy is taking up more than that, we've either got wayyyy more users than we thought we did, or we have a performance problem.
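Taking the ceilings from the comment above (250m CPU, 128Mi RAM) while keeping the requests already proposed in the diff would give roughly the following; whether 30Mi is a realistic working set for the proxy is an open question:

nginx:
  resources:
    limits:
      cpu: 250m         # the comment's suggested ceiling for the proxy
      memory: 128Mi
    requests:
      cpu: 250m         # requests kept as proposed in the diff
      memory: 30Mi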
@@ -83,7 +91,13 @@ redis:
   port: 6379
   podSecurityContext: {}
   securityContext: {}
-  resources: {}
+  resources:
+    limits:
+      cpu: 1
+      memory: 512Mi
+    requests:
+      cpu: 50m
+      memory: 30Mi
   nodeSelector: {}
   affinity: {}
   tolerations: []

Review comment (Collaborator) on redis.resources: Redis really doesn't like not being pinned, especially as an in-memory cache. I'd say pin it to 256Mi and like..... 250mcore. If that doesn't work, bump it.
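A sketch of the pinning suggested above (256Mi and 250m, with request == limit so the cache has a stable footprint). One extra note that is not from the PR: Redis's own maxmemory should usually sit somewhat below the container memory limit so it evicts keys instead of being OOM-killed; whether this chart exposes such a setting is an assumption I have not checked:

redis:
  resources:
    limits:
      cpu: 250m
      memory: 256Mi
    requests:
      cpu: 250m         # request == limit: the "pinning" the comment asks for
      memory: 256Mi     # keep Redis's maxmemory below this to avoid OOM kills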
@@ -92,7 +106,13 @@ pelias:
   port: 3000
   podSecurityContext: {}
   securityContext: {}
-  resources: {}
+  resources:
+    limits:
+      cpu: 1
+      memory: 512Mi
+    requests:
+      cpu: 250m
+      memory: 250Mi
   nodeSelector: {}
   affinity: {}
   tolerations: []

Review comment (Collaborator) on pelias.resources: No fucking idea here. Can probably get away with pinning limit to the smaller request here tho.
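One reading of "pin the limit to the smaller request" is to collapse the limits down to the requested values, which again gives the pod Guaranteed QoS; the interpretation is mine, and the numbers are simply the requests already in the diff:

pelias:
  resources:
    limits:
      cpu: 250m         # lowered to match the request, per the comment
      memory: 250Mi
    requests:
      cpu: 250m
      memory: 250Mi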