[scalar-manager] Create the Helm chart for Scalar Manager #254

Merged: 12 commits, Jun 12, 2024
12 changes: 8 additions & 4 deletions charts/scalar-manager/Chart.yaml
@@ -5,13 +5,17 @@ type: application
version: 2.0.0-SNAPSHOT
appVersion: 2.0.0-SNAPSHOT
Contributor Author

Now we only have 1.0.0-SNAPSHOT images.
I use 2.0.0-SNAPSHOT here because we will bump the SNAPSHOT image version to 2.0.0 after we officially release Scalar Manager.

deprecated: false
icon: https://scalar-labs.com/wp-content/themes/scalar/assets/img/logo_scalar.svg
keywords:
- scalardb
- scalardl
- scalar-manager
- scalardb-cluster
- scalardl-ledger
- scalardl-auditor
- scalar-admin-for-kubernetes
home: https://scalar-labs.com/
sources:
- https://github.com/scalar-labs/scalar-manager
- https://github.com/scalar-labs/scalar-manager-api
- https://github.com/scalar-labs/scalar-manager-web
maintainers:
- name: Takanori Yokoyama
  email: [email protected]
40 changes: 25 additions & 15 deletions charts/scalar-manager/README.md
@@ -11,18 +11,28 @@ Current chart version is `2.0.0-SNAPSHOT`

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| fullnameOverride | string | `""` | Override the fully qualified app name |
| image.pullPolicy | string | `"IfNotPresent"` | Specify a imagePullPolicy |
| image.repository | string | `"ghcr.io/scalar-labs/scalar-manager"` | Docker image |
| image.tag | string | `""` | Override the image tag whose default is the chart appVersion |
| imagePullSecrets | list | `[{"name":"reg-docker-secrets"}]` | Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace |
| nameOverride | string | `""` | Override the Chart name |
| replicaCount | int | `1` | number of replicas to deploy |
| scalarManager.grafanaUrl | string | `""` | |
| scalarManager.port | int | `5000` | The port that Scalar Manager container exposes |
| scalarManager.refreshInterval | int | `30` | |
| scalarManager.targets | list | `[]` | |
| service.port | int | `8000` | The port that service exposes |
| service.type | string | `"ClusterIP"` | The service type |
| serviceAccount.automountServiceAccountToken | bool | `true` | Specify to mount a service account token or not |
| serviceAccount.serviceAccountName | string | `""` | Name of the existing service account resource |
| api.applicationProperties | string | The minimum template of application.properties is set by default. | The application.properties for Scalar Manager. If you want to customize application.properties, you can override this value with your application.properties. |
| api.image.pullPolicy | string | `"IfNotPresent"` | |
| api.image.repository | string | `"ghcr.io/scalar-labs/scalar-manager-api"` | |
| api.image.tag | string | `""` | |
| api.resources | object | `{}` | |
| fullnameOverride | string | `""` | |
| imagePullSecrets[0].name | string | `"reg-docker-secrets"` | |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podAnnotations | object | `{}` | |
| podLabels | object | `{}` | |
| podSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | |
| replicaCount | int | `1` | |
| securityContext.allowPrivilegeEscalation | bool | `false` | |
| securityContext.capabilities.drop[0] | string | `"ALL"` | |
| securityContext.runAsNonRoot | bool | `true` | |
| service.port | int | `80` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.automountServiceAccountToken | bool | `true` | |
| serviceAccount.serviceAccountName | string | `""` | |
| tolerations | list | `[]` | |
| web.image.pullPolicy | string | `"IfNotPresent"` | |
| web.image.repository | string | `"ghcr.io/scalar-labs/scalar-manager-web"` | |
| web.image.tag | string | `""` | |
| web.resources | object | `{}` | |
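
As a usage sketch (the values below are illustrative; the applicationProperties content is a placeholder, not the chart's actual default template), a custom values file might look like this:

# my-values.yaml (hypothetical example)
api:
  image:
    tag: "2.0.0-SNAPSHOT"            # overrides the default (chart appVersion)
  applicationProperties: |
    # placeholder; replace with your real application.properties content
web:
  image:
    tag: "2.0.0-SNAPSHOT"
imagePullSecrets:
  - name: reg-docker-secrets         # this Secret must already exist in the namespace
service:
  type: ClusterIP
  port: 80

It can then be applied with a standard helm upgrade --install run that passes -f my-values.yaml.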
1 change: 1 addition & 0 deletions charts/scalar-manager/templates/_helpers.tpl
@@ -48,6 +48,7 @@ Selector labels
{{- define "scalar-manager.selectorLabels" -}}
app.kubernetes.io/name: {{ include "scalar-manager.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/app: scalar-manager
{{- end }}

{{/*
58 changes: 0 additions & 58 deletions charts/scalar-manager/templates/deployment.yaml

This file was deleted.

9 changes: 0 additions & 9 deletions charts/scalar-manager/templates/role.yaml

This file was deleted.

13 changes: 0 additions & 13 deletions charts/scalar-manager/templates/rolebinding.yaml

This file was deleted.

19 changes: 19 additions & 0 deletions charts/scalar-manager/templates/scalar-manager/clusterrole.yaml
@@ -0,0 +1,19 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
Collaborator

Just a question: why do we use a ClusterRole instead of a Role?

In other words, does Scalar Manager access any of the following kinds of resources, which require a ClusterRole?
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrole-example

- cluster-scoped resources (like nodes)
- non-resource endpoints (like /healthz)
- namespaced resources (like Pods), across all namespaces

metadata:
  name: {{ include "scalar-manager.fullname" . }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
rules:
- apiGroups: [""]
  resources: ["pods", "services", "namespaces", "configmaps", "secrets", "serviceaccounts"]
  verbs: ["get", "list", "create", "patch", "delete", "update"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "list", "create", "delete"]
@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
  name: {{ include "scalar-manager.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: {{ include "scalar-manager.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
10 changes: 10 additions & 0 deletions charts/scalar-manager/templates/scalar-manager/configmap.yaml
@@ -0,0 +1,10 @@
apiVersion: v1
Collaborator

Just a question. I think this ConfigMap is used for storing cluster and pause information, and it seems that this chart adds the create permission for ConfigMap resources.

Do we need to create this ConfigMap on the helm chart side?

I want to confirm whether the Scalar Manager API can create the ConfigMap on its own or not.

(If Scalar Manager can create the ConfigMap itself, I want to know why we need to create it on the helm chart side.)

Collaborator

As we discussed, we will update the Scalar Manager API to create the ConfigMap by itself and remove this ConfigMap from the helm chart side in a future release.

Collaborator

As we discussed, we noticed that this ConfigMap overwrites some stored metadata (e.g., managed cluster or pause duration info) when we run the helm upgrade command. So, it would be better to remove this ConfigMap from the helm chart side and create it on the Scalar Manager API side.

Collaborator

We address this issue on the Scalar Manager API side:
https://github.com/scalar-labs/scalar-manager-api/pull/90

kind: ConfigMap
metadata:
  name: {{ include "scalar-manager.fullname" . }}-api-application-properties
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
data:
  scalar-manager-api-application.properties:
    {{- toYaml .Values.api.applicationProperties | nindent 4 }}
74 changes: 74 additions & 0 deletions charts/scalar-manager/templates/scalar-manager/deployment.yaml
@@ -0,0 +1,74 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "scalar-manager.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/scalar-manager/configmap.yaml") . | sha256sum }}
Collaborator

Suggested change
checksum/config: {{ include (print $.Template.BasePath "/scalar-manager/configmap.yaml") . | sha256sum }}

I think this ConfigMap (/scalar-manager/configmap.yaml) is used for storing cluster or pause information. Also, we don't update this ConfigMap with the helm upgrade command directly (it will be updated by the Scalar Manager API).

If the above understanding is correct, this checksum/config annotation doesn't serve its purpose. Also, I think we don't need this annotation because we don't need to restart (re-create) pods when the ConfigMap updates.

So, I think we can remove this annotation.

Collaborator

As we discussed, we will remove this annotation.

Collaborator

As we discussed in another comment, we decided to use application.properties. So, we will keep this annotation to apply the updated application.properties properly when we run the helm upgrade command.
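
As a small sketch of why the annotation is kept (my reading of the standard Helm checksum pattern, using the chart's own template line): the pod template annotation is the SHA-256 of the rendered ConfigMap, so editing api.applicationProperties changes the hash on helm upgrade and triggers a rolling restart of the Deployment.

# Sketch of the checksum pattern: the annotation embeds the hash of the rendered
# ConfigMap, so any change to .Values.api.applicationProperties changes the pod
# template and rolls the pods on `helm upgrade`.
template:
  metadata:
    annotations:
      checksum/config: {{ include (print $.Template.BasePath "/scalar-manager/configmap.yaml") . | sha256sum }}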

        {{- if .Values.podAnnotations }}
        {{- toYaml .Values.podAnnotations | nindent 8 }}
        {{- end }}
      labels:
        {{- include "scalar-manager.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- toYaml .Values.podLabels | nindent 8 }}
        {{- end }}
    spec:
      restartPolicy: Always
      serviceAccountName: {{ include "scalar-manager.serviceAccountName" . }}
      automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
      containers:
        - name: {{ .Chart.Name }}-api
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag | default .Chart.AppVersion }}"
          resources:
            {{- toYaml .Values.api.resources | nindent 12 }}
          ports:
            - containerPort: 8080
          imagePullPolicy: {{ .Values.api.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          volumeMounts:
            - name: api-application-properties-volume
              mountPath: /app/application.properties
              subPath: scalar-manager-api-application.properties
        - name: {{ .Chart.Name }}-web
Collaborator

If I remember correctly, Scalar Manager works as follows.

+-[Kubernetes Cluster A]---+  +-[Kubernetes Cluster B]---+  +-[Kubernetes Cluster C]---+
|                          |  |                          |  |                          |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|  | Scalar products    |  |  |  | Scalar products    |  |  |  | Scalar products    |  |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|                          |  |                          |  |                          |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|  | Scalar Manager API |  |  |  | Scalar Manager API |  |  |  | Scalar Manager API |  |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|            |             |  |            |             |  |            |             |
+------------+-------------+  +------------+-------------+  +------------+-------------+
             |                             |                             |
             |                             |                             |
             |                             |                             |
             +-----------------------------+-----------------------------+
                                           |
                                           |
                                           |
                                 +---------+----------+
                                 | Scalar Manager Web |
                                 +--------------------+

So, we don't need to deploy Scalar Manager API and Scalar Manager Web in the same pod.

Rather, it would be better to deploy Scalar Manager API and Scalar Manager Web separately.

Is my understanding correct? (Sorry, I might be missing some Scalar Manager specifications...)

Collaborator

As we discussed, at this time we must deploy the API and web containers in one pod. That is a specification of the current Scalar Manager.

image: "{{ .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Chart.AppVersion }}"
resources:
{{- toYaml .Values.web.resources | nindent 12 }}
ports:
- containerPort: 3000
imagePullPolicy: {{ .Values.web.image.pullPolicy }}
securityContext:
runAsUser: 1000
Collaborator

Question 1

Suggested change
runAsUser: 1000
runAsUser: 1001

It seems that we use 1001 on the Dockerfile side of Scalar Manager Web.
https://github.com/scalar-labs/scalar-manager-web/blob/main/Dockerfile#L41

Is this UID 1000 correct? Or should we use 1001 instead of 1000?

Question 2

It seems that we specify the USER directive on the Dockerfile side, and we set runAsNonRoot: true on the helm chart side as a default value.

In this case, I think the scalar-manager-api container image will run with UID 1001 automatically.

Do we really need to specify runAsUser as a hard-coded value here? If we don't, I think it would be better to remove it. (For example, I guess this hard-coded value might cause some issues in an OpenShift environment.)

Contributor Author

@kota2and3kan
Thank you for your advice.

1000 was a typo. It should be 1001.

Regarding question 2:

I did this because I encountered an issue when creating the container with the runAsNonRoot security context. This is the error message from kubectl describe pod <pod-name>:

  Warning  Failed     12s (x7 over 65s)  kubelet            Error: container has runAsNonRoot and image has non-numeric user (nextjs), cannot verify user is non-root (pod: "scalar-manager-57cdcb57d4-pbvvc_scalar-manager(da3940c6-0d80-4287-aa2a-1e9b1f8adc6d)", container: scalar-manager-web)

I am not sure if it's because of the USER instruction in the Dockerfile here: https://github.com/scalar-labs/scalar-manager-web/blob/main/Dockerfile#L54

It specifies the user name but not the user ID.

I tried to change it locally with

USER 1001

like we did for the API
https://github.com/scalar-labs/scalar-manager-api/blob/main/Dockerfile#L22

However, my internet here is too slow to finish building the image, so I couldn't really test it.
(The node packages are just a mess...)

Anyway, since it's not good practice to use the runAsUser context, I removed it in 5502ca0.

I will create a PR in the web repository to specify the user by user ID instead of user name, and I will try to test it later with a better Internet connection.

Contributor Author

The PR that uses the numeric user ID instead of the user name has been submitted here:
https://github.com/scalar-labs/scalar-manager-web/pull/70

Contributor Author

I guess the CI failed because of the removal of runAsUser.
We can test it after we merge https://github.com/scalar-labs/scalar-manager-web/pull/70 (the workflow will create the snapshot image, and then we can re-run the action here).

Collaborator

@supl
Thank you for the updates!

As I mentioned on the Scalar Manager Web side, according to a code comment in Kubernetes, it seems that we MUST use a number (UID) instead of a name (string) in the container image if we want to use a non-root user.
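
To summarize the outcome as a sketch (the defaults below mirror this chart's securityContext values; the comments are my reading of the kubelet's runAsNonRoot check):

# With runAsNonRoot: true and no runAsUser, the kubelet can only verify a non-root
# user when the image's USER is numeric (e.g. "USER 1001"); a user name such as
# "USER nextjs" fails with "cannot verify user is non-root", as in the error above.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  # runAsUser: 1001   # intentionally not hard-coded; the UID comes from the image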

            {{- toYaml .Values.securityContext | nindent 12 }}
      volumes:
        - name: api-application-properties-volume
          configMap:
            name: {{ include "scalar-manager.fullname" . }}-api-application-properties
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
@@ -1,16 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ include "scalar-manager.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.scalarManager.port }}
      protocol: TCP
      name: http
    - protocol: TCP
      name: web
      port: {{ .Values.service.port }}
      targetPort: 3000
  selector:
    {{- include "scalar-manager.selectorLabels" . | nindent 4 }}
@@ -2,8 +2,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ include "scalar-manager.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
{{- end }}