From b5846ec31ee06c391a78f38185d78d7d1d466f7e Mon Sep 17 00:00:00 2001 From: criscola Date: Fri, 23 Sep 2022 14:03:49 +0200 Subject: [PATCH 01/21] Add namespace to TAS Service Account Avoids failures when applying the TAS Service Account through ClusterResourceSet. Relevant error message was: "failed to create object /v1, Kind=ServiceAccount /telemetry-aware-scheduling-service-account: an empty namespace may not be set during creation". Signed-off-by: Cristiano Colangelo --- telemetry-aware-scheduling/deploy/tas-rbac-accounts.yaml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/telemetry-aware-scheduling/deploy/tas-rbac-accounts.yaml b/telemetry-aware-scheduling/deploy/tas-rbac-accounts.yaml index f227756f..6d7b99d6 100644 --- a/telemetry-aware-scheduling/deploy/tas-rbac-accounts.yaml +++ b/telemetry-aware-scheduling/deploy/tas-rbac-accounts.yaml @@ -34,4 +34,5 @@ rules: apiVersion: v1 kind: ServiceAccount metadata: - name: telemetry-aware-scheduling-service-account \ No newline at end of file + name: telemetry-aware-scheduling-service-account + namespace: default \ No newline at end of file From c890226deb65f49f8cc4af90635553c7f08a7ca5 Mon Sep 17 00:00:00 2001 From: criscola Date: Fri, 23 Sep 2022 14:05:05 +0200 Subject: [PATCH 02/21] Add Cluster API deployment method Signed-off-by: Cristiano Colangelo --- .../deploy/cluster-api/README.md | 132 ++++++++++++++++++ .../deploy/cluster-api/cluster-patch.yaml | 5 + .../cluster-api/clusterresourcesets.yaml | 83 +++++++++++ .../kubeadmcontrolplane-patch.yaml | 54 +++++++ 4 files changed, 274 insertions(+) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/README.md create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/README.md b/telemetry-aware-scheduling/deploy/cluster-api/README.md new file mode 100644 index 00000000..8f0a5dbc --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/README.md @@ -0,0 +1,132 @@ +# Cluster API deployment + +## Introduction + +Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. [Learn more](https://cluster-api.sigs.k8s.io/introduction.html). + +This folder contains an automated and declarative way of deploying the Telemetry Aware Scheduler using Cluster API. We will make use of the [ClusterResourceSet feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) to automatically apply a set of resources. Note you must enable its feature gate before running `clusterctl init` (with `export EXP_CLUSTER_RESOURCE_SET=true`). + +## Requirements + +- A management cluster provisioned in your infrastructure of choice. See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html). +- Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25). + +## Provision clusters with TAS installed using Cluster API + +We will provision a cluster with the TAS installed using Cluster API. + +1. 
In your management cluster, with all your environment variables set to generate cluster definitions, run for example:
+
+```bash
+clusterctl generate cluster scheduling-dev-wkld \
+    --kubernetes-version v1.25.0 \
+    --control-plane-machine-count=1 \
+    --worker-machine-count=3 \
+    > your-manifests.yaml
+```
+
+Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this
+step in the same way as we will see with TAS resources using ClusterResourceSets.
+
+2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
+`your-manifests.yaml`.
+
+If you move `KubeadmControlPlane` into its own file, you can use the convenient `yq` utility:
+
+> Note that if you are already using patches, the `directory: /tmp/kubeadm/patches` value must match your existing
+> patches directory, otherwise the property will be overwritten.
+
+```bash
+yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
+```
+
+The new config will:
+- Configure TLS certificates for the extender
+- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
+- Place `KubeSchedulerConfiguration` into control plane nodes and pass the corresponding CLI flag to the scheduler.
+
+You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target
+it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` to the `Cluster` resource present in `your-manifests.yaml`.
+
+3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience:
+
+First, under `telemetry-aware-scheduling/deploy/charts`, tweak the charts if needed (e.g.
+additional metric scraping configurations), then render the charts:
+
+```bash
+helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml
+helm template ../charts/prometheus_helm_chart/ > prometheus.yaml
+helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml
+```
+
+You need to add Namespace resources, otherwise applying the rendered manifests will fail. Prepend the following to `prometheus.yaml`:
+
+```yaml
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: monitoring
+  labels:
+    name: monitoring
+```
+
+Prepend the following to `prometheus-custom-metrics.yaml`:
+```yaml
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: custom-metrics
+  labels:
+    name: custom-metrics
+```
+
+The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key.
+Information on how to generate correctly signed certs in Kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md).
+Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory.
+
+Run the following:
+
+```bash
+kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml
+kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml
+```
+
+**Attention: Don't commit the TLS certificate and private key to any Git repo, as it is considered bad security practice! Make sure to wipe them off your workstation after applying the corresponding Secrets to your cluster.**
+
+You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter"
+ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience:
+
+```bash
+yq '.' ../tas-*.yaml > tas.yaml
+```
+
+4. Create and apply the ConfigMaps
+
+```bash
+kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
+kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
+kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
+kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
+kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
+kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
+kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
+```
+
+Apply to the management cluster:
+
+```bash
+kubectl apply -f '*-configmap.yaml'
+```
+
+5. Apply the ClusterResourceSets
+
+ClusterResourceSet resources are provided in `clusterresourcesets.yaml`.
+Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`.
+
+6. Apply the cluster manifests
+
+Finally, apply your cluster manifests with `kubectl apply -f your-manifests.yaml`.
+The Telemetry Aware Scheduler will be running on your new cluster.
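+
+As a quick sanity check (a sketch: it assumes the cluster name used in step 1 and that the TAS components were left in their default namespaces), fetch the workload cluster kubeconfig and list the Deployments:
+
+```bash
+# Export the workload cluster kubeconfig from the management cluster.
+clusterctl get kubeconfig scheduling-dev-wkld > scheduling-dev-wkld.kubeconfig
+# The TAS Deployment and the extender Service land in the default namespace.
+kubectl --kubeconfig scheduling-dev-wkld.kubeconfig get deployments,services -n default
+```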
+ +You can test if the scheduler actually works by following this guide: +[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml new file mode 100644 index 00000000..eb002606 --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml @@ -0,0 +1,5 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + scheduler: tas \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml b/telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml new file mode 100644 index 00000000..986c107c --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml @@ -0,0 +1,83 @@ +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus-node-exporter +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-node-exporter-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: custom-metrics-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: custom-metrics-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: extender +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: extender-configmap diff --git a/telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml new file mode 100644 index 00000000..3ce5e944 --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml @@ -0,0 +1,54 @@ +apiVersion: controlplane.cluster.x-k8s.io/v1beta1 +kind: KubeadmControlPlane +spec: + kubeadmConfigSpec: + files: + - path: /etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml + content: | + apiVersion: kubescheduler.config.k8s.io/v1 + kind: KubeSchedulerConfiguration + clientConnection: + kubeconfig: /etc/kubernetes/scheduler.conf + extenders: + - urlPrefix: "https://tas-service.default.svc.cluster.local:9001" + prioritizeVerb: "scheduler/prioritize" + filterVerb: "scheduler/filter" + weight: 1 + enableHTTPS: true + managedResources: + - name: 
"telemetry/scheduling" + ignoredByScheduler: true + ignorable: true + tlsConfig: + insecure: false + certFile: "/host/certs/client.crt" + keyFile: "/host/certs/client.key" + - path: /tmp/kubeadm/patches/kube-scheduler+json.json + content: |- + [ + { + "op": "add", + "path": "/spec/dnsPolicy", + "value": "ClusterFirstWithHostNet" + } + ] + clusterConfiguration: + scheduler: + extraArgs: + config: "/etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml" + extraVolumes: + - hostPath: "/etc/kubernetes/schedulerconfig" + mountPath: "/etc/kubernetes/schedulerconfig" + name: schedulerconfig + - hostPath: "/etc/kubernetes/pki/ca.key" + mountPath: "/host/certs/client.key" + name: cacert + - hostPath: "/etc/kubernetes/pki/ca.crt" + mountPath: "/host/certs/client.crt" + name: clientcert + initConfiguration: + patches: + directory: /tmp/kubeadm/patches + joinConfiguration: + patches: + directory: /tmp/kubeadm/patches \ No newline at end of file From 4d7d6df4e0034afce59a8ad79403d03522b36e92 Mon Sep 17 00:00:00 2001 From: Madalina Lazar Date: Tue, 18 Oct 2022 12:21:38 +0100 Subject: [PATCH 03/21] Adding code_of_conduct and contributing readme file Signed-off-by: Madalina Lazar --- CODE_OF_CONDUCT.md | 131 +++++++++++++++++++++++++++++++++++++++++++++ CONTRIBUTING.md | 63 ++++++++++++++++++++++ 2 files changed, 194 insertions(+) create mode 100644 CODE_OF_CONDUCT.md create mode 100644 CONTRIBUTING.md diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 00000000..58dba18d --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,131 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. + +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall + community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of + any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, + without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. 
+ +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at +CommunityCodeOfConduct AT intel DOT com. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the +community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at +[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at +[https://www.contributor-covenant.org/translations][translations]. 
+ +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..16c90b35 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,63 @@ +# Contributing + +### License + +*Platform Aware Scheduling* is licensed under the terms in +[LICENSE](LICENSE). By contributing to the project, you agree to the +license and copyright terms therein and release your contribution +under these terms. + +### Sign your work + +Please use the sign-off line at the end of the patch. Your signature +certifies that you wrote the patch or otherwise have the right to +pass it on as an open-source patch. The rules are pretty simple: if +you can certify the below +(from [developercertificate.org](http://developercertificate.org/)): + +``` +Developer Certificate of Origin +Version 1.1 + +Copyright (C) 2004, 2006 The Linux Foundation and its contributors. +660 York Street, Suite 102, +San Francisco, CA 94110 USA + +Everyone is permitted to copy and distribute verbatim copies of this +license document, but changing it is not allowed. + +Developer's Certificate of Origin 1.1 + +By making a contribution to this project, I certify that: + +(a) The contribution was created in whole or in part by me and I + have the right to submit it under the open source license + indicated in the file; or + +(b) The contribution is based upon previous work that, to the best + of my knowledge, is covered under an appropriate open source + license and I have the right under that license to submit that + work with modifications, whether created in whole or in part + by me, under the same open source license (unless I am + permitted to submit under a different license), as indicated + in the file; or + +(c) The contribution was provided directly to me by some other + person who certified (a), (b) or (c) and I have not modified + it. + +(d) I understand and agree that this project and the contribution + are public and that a record of the contribution (including all + personal information I submit with it, including my sign-off) is + maintained indefinitely and may be redistributed consistent with + this project or the open source license(s) involved. +``` + +Then you just add a line to every git commit message: + + Signed-off-by: Joe Smith + +Use your real name (sorry, no pseudonyms or anonymous contributions.) + +If you set your `user.name` and `user.email` git configs, you can sign your +commit automatically with `git commit -s`. 
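+
+For example, using the sample identity from above:
+
+```bash
+# One-time setup of the identity git uses for the sign-off line.
+git config user.name "Joe Smith"
+git config user.email joe.smith@email.com
+# -s appends "Signed-off-by: Joe Smith <joe.smith@email.com>" to the commit message.
+git commit -s -m "Describe your change"
+```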
From 19026c4c9e785715fbdc56d0e25142764091f6c9 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Tue, 17 Jan 2023 14:51:05 +0100 Subject: [PATCH 04/21] Add Docker CAPI deployment specific guide --- .../deploy/cluster-api/README.md | 123 +------------- .../deploy/cluster-api/docker/capi-docker.md | 160 ++++++++++++++++++ .../{ => docker}/cluster-patch.yaml | 0 .../docker/clusterclass-patch.yaml | 9 + .../kubeadmcontrolplanetemplate-patch.yaml | 56 ++++++ .../deploy/cluster-api/generic/capi.md | 128 ++++++++++++++ .../cluster-api/generic/cluster-patch.yaml | 5 + .../{ => generic}/clusterresourcesets.yaml | 0 .../kubeadmcontrolplane-patch.yaml | 0 9 files changed, 362 insertions(+), 119 deletions(-) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md rename telemetry-aware-scheduling/deploy/cluster-api/{ => docker}/cluster-patch.yaml (100%) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml rename telemetry-aware-scheduling/deploy/cluster-api/{ => generic}/clusterresourcesets.yaml (100%) rename telemetry-aware-scheduling/deploy/cluster-api/{ => generic}/kubeadmcontrolplane-patch.yaml (100%) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/README.md b/telemetry-aware-scheduling/deploy/cluster-api/README.md index 8f0a5dbc..c9f7d267 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/README.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/README.md @@ -6,127 +6,12 @@ Cluster API is a Kubernetes sub-project focused on providing declarative APIs an This folder contains an automated and declarative way of deploying the Telemetry Aware Scheduler using Cluster API. We will make use of the [ClusterResourceSet feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) to automatically apply a set of resources. Note you must enable its feature gate before running `clusterctl init` (with `export EXP_CLUSTER_RESOURCE_SET=true`). -## Requirements +## Guides -- A management cluster provisioned in your infrastructure of choice. See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html). -- Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25). +- [Cluster API deployment - Docker provider (for local testing/development only)](docker/capi-docker.md) +- [Cluster API deployment - Generic provider](generic/capi.md) -## Provision clusters with TAS installed using Cluster API - -We will provision a cluster with the TAS installed using Cluster API. - -1. In your management cluster, with all your environment variables set to generate cluster definitions, run for example: - -```bash -clusterctl generate cluster scheduling-dev-wkld \ - --kubernetes-version v1.25.0 \ - --control-plane-machine-count=1 \ - --worker-machine-count=3 \ - > your-manifests.yaml -``` - -Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this -step in the same way as we will see with TAS resources using ClusterResourceSets. - -2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with -`your-manifests.yaml`. 
- -If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility: - -> Note that if you are already using patches, `directory: /tmp/kubeadm/patches` must coincide, else the property will be -> overwritten. - -```bash -yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml -``` - -The new config will: -- Configure TLS certificates for the extender -- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet` -- Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler. - -You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target -it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` in your `Cluster` resource present in `your-manifests.yaml`. - -3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: - -First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. -additional metric scraping configurations), then render the charts: - -```bash -helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml -helm template ../charts/prometheus_helm_chart/ > prometheus.yaml -helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml -``` - -You need to add namespaces resources, else resource application will fail. Prepend the following to `prometheus.yaml`: - -```bash -kind: Namespace -apiVersion: v1 -metadata: - name: monitoring - labels: - name: monitoring -```` - -Prepend the following to `prometheus-custom-metrics.yaml`: -```bash -kind: Namespace -apiVersion: v1 -metadata: - name: custom-metrics - labels: - name: custom-metrics -``` - -The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. -Information on how to generate correctly signed certs in kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). -Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory. - -Run the following: - -```bash -kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml -kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml -``` - -**Attention: Don't commit the TLS certificate and private key to any Git repo as it is considered bad security practice! Makesure to wipe them off your workstation after applying the relative Secrets to your cluster.** - -You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter" -ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience: - -```bash -yq '.' ../tas-*.yaml > tas.yaml -``` - -4. 
Create and apply the ConfigMaps - -```bash -kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml -kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml -kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml -kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml -kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml -kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml -kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml -``` - -Apply to the management cluster: - -```bash -kubectl apply -f '*-configmap.yaml' -``` - -5. Apply the ClusterResourceSets - -ClusterResourceSets resources are already given to you in `clusterresourcesets.yaml`. -Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml` - -6. Apply the cluster manifests - -Finally, you can apply your manifests `kubectl apply -f your-manifests.yaml`. -The Telemetry Aware Scheduler will be running on your new cluster. +## Testing You can test if the scheduler actually works by following this guide: [Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md new file mode 100644 index 00000000..244e172c --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -0,0 +1,160 @@ +# Cluster API deployment - Docker provider (for local testing/development only) + +## Requirements + +- A management cluster provisioned in your infrastructure of choice and the relative tooling. + See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html). +- Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25). +- Docker + +## Provision clusters with TAS installed using Cluster API + +We will provision a KinD cluster with the TAS installed using Cluster API. This guide is meant for local testing/development only. + +For the deployment using a generic provider, please refer to [Cluster API deployment - Generic provider](capi.md). + +1. Run the following to set up a KinD cluster for CAPD: + +```bash +cat > kind-cluster-with-extramounts.yaml < capi-quickstart.yaml +``` + +Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this +step in the same way as we will see with TAS resources using ClusterResourceSets. + +2. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with + `your-manifests.yaml`. 
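+
+If you would rather not merge by hand, the `yq` merge shown in the generic guide works here as well (a sketch: it assumes `yq` v4 and that you have moved the `KubeadmControlPlaneTemplate` into its own file, here called `your-own-kubeadmcontrolplanetemplate.yaml`):
+
+```bash
+# Deep-merge the template with the provided patch; *+ appends arrays instead of replacing them.
+yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplanetemplate.yaml kubeadmcontrolplanetemplate-patch.yaml > final-kubeadmcontrolplanetemplate.yaml
+```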
+ +The new config will: +- Configure TLS certificates for the extender +- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet` +- Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler. +- Change the behavior of the pre-existing patch application of `/spec/template/spec/kubeadmConfigSpec/files` in `ClusterClass` +such that our new patch is not ignored/overwritten. For some more clarification on this, see [this issue](https://github.com/kubernetes-sigs/cluster-api/pull/7630). + +You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target +it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` in your `Cluster` resource present in `your-manifests.yaml`. + +3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: + +First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. +additional metric scraping configurations), then render the charts: + + ```bash + helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml + helm template ../charts/prometheus_helm_chart/ > prometheus.yaml + helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml + ``` + +You need to add namespaces resources, else resource application will fail. Prepend the following to `prometheus.yaml`: + + ```bash +kind: Namespace +apiVersion: v1 +metadata: + name: monitoring + labels: + name: monitoring + ```` + +Prepend the following to `prometheus-custom-metrics.yaml`: + ```bash +kind: Namespace +apiVersion: v1 +metadata: + name: custom-metrics + labels: + name: custom-metrics + ``` + +The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. +Information on how to generate correctly signed certs in kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). +Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory. + +Run the following: + + ```bash + kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml + kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml + ``` + +**Attention: Don't commit the TLS certificate and private key to any Git repo as it is considered bad security practice! Make sure to wipe them off your workstation after applying the relative Secrets to your cluster.** + +You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter" +ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience: + + ```bash + yq '.' ../tas-*.yaml > tas.yaml + ``` + +4. 
Create and apply the ConfigMaps
+
+  ```bash
+  kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
+  kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
+  kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
+  kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
+  kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
+  kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
+  kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
+  ```
+
+Apply to the management cluster:
+
+  ```bash
+  kubectl apply -f '*-configmap.yaml'
+  ```
+
+5. Apply the ClusterResourceSets
+
+ClusterResourceSets resources are already given to you in `clusterresourcesets.yaml`.
+Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`
+
+6. Apply the cluster manifests
+
+Finally, apply your cluster manifests with `kubectl apply -f your-manifests.yaml`.
+The Telemetry Aware Scheduler will be running on your new cluster. You can connect to the workload cluster by
+exporting its kubeconfig:
+
+```bash
+clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
+```
+
+Then, specifically for the CAPD provider, point the kubeconfig to the correct address of the HAProxy container:
+
+```bash
+sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig
+```
+
+You can test if the scheduler actually works by following this guide:
+[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md)
\ No newline at end of file
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml
similarity index 100%
rename from telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml
rename to telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml
new file mode 100644
index 00000000..da599c36
--- /dev/null
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml
@@ -0,0 +1,9 @@
+apiVersion: cluster.x-k8s.io/v1beta1
+kind: ClusterClass
+spec:
+  patches:
+    - definitions:
+        - jsonPatches:
+            - op: add
+              # Note: we must add a dash ("-") after "files", as shown below. Otherwise the patch application in KubeadmControlPlaneTemplate will fail!
+              path: /spec/template/spec/kubeadmConfigSpec/files/-
\ No newline at end of file
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml
new file mode 100644
index 00000000..c8eec953
--- /dev/null
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml
@@ -0,0 +1,56 @@
+apiVersion: controlplane.cluster.x-k8s.io/v1beta1
+kind: KubeadmControlPlaneTemplate
+spec:
+  template:
+    spec:
+      kubeadmConfigSpec:
+        clusterConfiguration:
+          scheduler:
+            extraArgs:
+              config: "/etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml"
+            extraVolumes:
+              - hostPath: "/etc/kubernetes/schedulerconfig"
+                mountPath: "/etc/kubernetes/schedulerconfig"
+                name: schedulerconfig
+              - hostPath: "/etc/kubernetes/pki/ca.key"
+                mountPath: "/host/certs/client.key"
+                name: cacert
+              - hostPath: "/etc/kubernetes/pki/ca.crt"
+                mountPath: "/host/certs/client.crt"
+                name: clientcert
+        initConfiguration:
+          patches:
+            directory: /etc/tas/patches
+        joinConfiguration:
+          patches:
+            directory: /etc/tas/patches
+        files:
+          - path: /etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml
+            content: |
+              apiVersion: kubescheduler.config.k8s.io/v1
+              kind: KubeSchedulerConfiguration
+              clientConnection:
+                kubeconfig: /etc/kubernetes/scheduler.conf
+              extenders:
+                - urlPrefix: "https://tas-service.default.svc.cluster.local:9001"
+                  prioritizeVerb: "scheduler/prioritize"
+                  filterVerb: "scheduler/filter"
+                  weight: 1
+                  enableHTTPS: true
+                  managedResources:
+                    - name: "telemetry/scheduling"
+                      ignoredByScheduler: true
+                  ignorable: true
+                  tlsConfig:
+                    insecure: false
+                    certFile: "/host/certs/client.crt"
+                    keyFile: "/host/certs/client.key"
+          - path: /etc/tas/patches/kube-scheduler+json.json
+            content: |-
+              [
+                {
+                  "op": "add",
+                  "path": "/spec/dnsPolicy",
+                  "value": "ClusterFirstWithHostNet"
+                }
+              ]
\ No newline at end of file
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
new file mode 100644
index 00000000..be3cde25
--- /dev/null
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -0,0 +1,128 @@
+# Cluster API deployment - Generic provider
+
+## Requirements
+
+- A management cluster provisioned in your infrastructure of choice and the related tooling.
+  See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html).
+- Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25).
+
+## Provision clusters with TAS installed using Cluster API
+
+We will provision a generic cluster with the TAS installed using Cluster API. This was tested using the GCP provider.
+
+For the deployment using the Docker provider (local testing/development only), please refer to [Cluster API deployment - Docker provider (for local testing/development only)](../docker/capi-docker.md).
+
+1. In your management cluster, with all your environment variables set to generate cluster definitions, run for example:
+
+```bash
+clusterctl generate cluster scheduling-dev-wkld \
+    --kubernetes-version v1.25.0 \
+    --control-plane-machine-count=1 \
+    --worker-machine-count=3 \
+    > your-manifests.yaml
+```
+
+Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this
+step in the same way as we will see with TAS resources using ClusterResourceSets.
+
+2. 
Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
+  `your-manifests.yaml`.
+
+If you move `KubeadmControlPlane` into its own file, you can use the convenient `yq` utility:
+
+> Note that if you are already using patches, the `directory: /tmp/kubeadm/patches` value must match your existing
+> patches directory, otherwise the property will be overwritten.
+
+```bash
+yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
+```
+
+The new config will:
+- Configure TLS certificates for the extender
+- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
+- Place `KubeSchedulerConfiguration` into control plane nodes and pass the corresponding CLI flag to the scheduler.
+
+You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target
+it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` to the `Cluster` resource present in `your-manifests.yaml`.
+
+3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience:
+
+First, under `telemetry-aware-scheduling/deploy/charts`, tweak the charts if needed (e.g.
+additional metric scraping configurations), then render the charts:
+
+```bash
+helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml
+helm template ../charts/prometheus_helm_chart/ > prometheus.yaml
+helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml
+```
+
+You need to add Namespace resources, otherwise applying the rendered manifests will fail. Prepend the following to `prometheus.yaml`:
+
+```yaml
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: monitoring
+  labels:
+    name: monitoring
+```
+
+Prepend the following to `prometheus-custom-metrics.yaml`:
+```yaml
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: custom-metrics
+  labels:
+    name: custom-metrics
+```
+
+The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key.
+Information on how to generate correctly signed certs in Kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md).
+Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory.
+
+Run the following:
+
+```bash
+kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml
+kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml
+```
+
+**Attention: Don't commit the TLS certificate and private key to any Git repo, as it is considered bad security practice! Make sure to wipe them off your workstation after applying the corresponding Secrets to your cluster.**
+
+You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter"
+ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience:
+
+```bash
+yq '.' ../tas-*.yaml > tas.yaml
+```
+
+4. 
Create and apply the ConfigMaps + +```bash +kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml +kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml +kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml +kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml +kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml +kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml +kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml +``` + +Apply to the management cluster: + +```bash +kubectl apply -f '*-configmap.yaml' +``` + +5. Apply the ClusterResourceSets + +ClusterResourceSets resources are already given to you in `clusterresourcesets.yaml`. +Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml` + +6. Apply the cluster manifests + +Finally, you can apply your manifests `kubectl apply -f your-manifests.yaml`. +The Telemetry Aware Scheduler will be running on your new cluster. + +You can test if the scheduler actually works by following this guide: +[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml new file mode 100644 index 00000000..eb002606 --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml @@ -0,0 +1,5 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + scheduler: tas \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml similarity index 100% rename from telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml rename to telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml similarity index 100% rename from telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml rename to telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml From f44598a573471a7f1b8e203fc669f46ca37d6e53 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Tue, 17 Jan 2023 14:56:45 +0100 Subject: [PATCH 05/21] Add ClusterResourceSets for CAPD deployment --- .../docker/clusterresourcesets.yaml | 83 +++++++++++++++++++ 1 file changed, 83 insertions(+) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml 
b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml new file mode 100644 index 00000000..986c107c --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml @@ -0,0 +1,83 @@ +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus-node-exporter +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-node-exporter-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: custom-metrics-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: custom-metrics-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: extender +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: extender-configmap From 6be76481af5f39bb1dc0686f53d7c0462c31ad88 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 11:25:25 +0100 Subject: [PATCH 06/21] Move CRS to 'shared' folder. Add Calico CRS resources for simplicity. Update docs relative to Calico CRS. --- .../deploy/cluster-api/docker/capi-docker.md | 32 +- .../docker/clusterresourcesets.yaml | 12 + .../deploy/cluster-api/generic/capi.md | 10 +- .../generic/clusterresourcesets.yaml | 12 + .../deploy/cluster-api/shared/calico.yaml | 4684 +++++++++++++++++ .../shared/clusterresourcesets.yaml | 95 + 6 files changed, 4824 insertions(+), 21 deletions(-) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/shared/clusterresourcesets.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 244e172c..2c4bab7d 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -49,9 +49,6 @@ clusterctl generate cluster capi-quickstart --flavor development \ > capi-quickstart.yaml ``` -Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this -step in the same way as we will see with TAS resources using ClusterResourceSets. - 2. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with `your-manifests.yaml`. @@ -119,26 +116,27 @@ ClusterRole. 
We will join the TAS manifests together, so we can have a single ConfigMap for convenience:
 
 4. Create and apply the ConfigMaps
 
-  ```bash
-  kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
-  kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
-  kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
-  kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
-  kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
-  kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
-  kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
-  ```
+```bash
+kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
+kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
+kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
+kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
+kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
+kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
+kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
+kubectl create configmap calico-configmap --from-file=../shared/calico.yaml -o yaml --dry-run=client > calico-configmap.yaml
+```
 
 Apply to the management cluster:
 
-  ```bash
-  kubectl apply -f '*-configmap.yaml'
-  ```
+```bash
+kubectl apply -f '*-configmap.yaml'
+```
 
 5. Apply the ClusterResourceSets
 
-ClusterResourceSets resources are already given to you in `clusterresourcesets.yaml`.
-Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`
+ClusterResourceSet resources are provided in `../shared/clusterresourcesets.yaml`.
+Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml`.
 
 6. 
Apply the cluster manifests
 
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml
index 986c107c..7b39ac6b 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterresourcesets.yaml
@@ -81,3 +81,15 @@ spec:
   resources:
     - kind: ConfigMap
       name: extender-configmap
+---
+apiVersion: addons.cluster.x-k8s.io/v1alpha3
+kind: ClusterResourceSet
+metadata:
+  name: calico
+spec:
+  clusterSelector:
+    matchLabels:
+      cni: calico
+  resources:
+    - kind: ConfigMap
+      name: calico-configmap
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index be3cde25..0ebf1b14 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -21,8 +21,9 @@ clusterctl generate cluster scheduling-dev-wkld \
     > your-manifests.yaml
 ```
 
-Be aware that you will need to install a CNI such as Calico before the cluster will be usable. You may automate this
-step in the same way as we will see with TAS resources using ClusterResourceSets.
+Be aware that you will need to install a CNI such as Calico before the cluster will be usable.
+Calico works for the great majority of providers, so all the needed configuration has been provided for your convenience (i.e. the ClusterResourceSet, the CRS label on the `Cluster`, and the CRS ConfigMap).
+For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart.
 
 2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
   `your-manifests.yaml`.
@@ -106,6 +107,7 @@ kubectl create configmap prometheus-node-exporter-configmap --from-file=./promet
 kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
 kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
 kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
+kubectl create configmap calico-configmap --from-file=../shared/calico.yaml -o yaml --dry-run=client > calico-configmap.yaml
 ```
 
 Apply to the management cluster:
 
@@ -116,8 +118,8 @@ kubectl apply -f '*-configmap.yaml'
 
 5. Apply the ClusterResourceSets
 
-ClusterResourceSets resources are already given to you in `clusterresourcesets.yaml`.
-Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`
+ClusterResourceSet resources are provided in `../shared/clusterresourcesets.yaml`.
+Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml`.
 
 6. 
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml
index 986c107c..7b39ac6b 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/clusterresourcesets.yaml
@@ -81,3 +81,15 @@ spec:
   resources:
     - kind: ConfigMap
       name: extender-configmap
+---
+apiVersion: addons.cluster.x-k8s.io/v1alpha3
+kind: ClusterResourceSet
+metadata:
+  name: calico
+spec:
+  clusterSelector:
+    matchLabels:
+      cni: calico
+  resources:
+    - kind: ConfigMap
+      name: calico-configmap
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml b/telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml
new file mode 100644
index 00000000..4440f3d1
--- /dev/null
+++ b/telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml
@@ -0,0 +1,4684 @@
+---
+# Source: calico/templates/calico-config.yaml
+# This ConfigMap is used to configure a self-hosted Calico installation.
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: calico-config
+  namespace: kube-system
+data:
+  # Typha is disabled.
+  typha_service_name: "none"
+  # Configure the backend to use.
+  calico_backend: "bird"
+
+  # Configure the MTU to use for workload interfaces and tunnels.
+  # By default, MTU is auto-detected, and explicitly setting this field should not be required.
+  # You can override auto-detection by providing a non-zero value.
+  veth_mtu: "0"
+
+  # The CNI network configuration to install on each node. The special
+  # values in this config will be automatically populated.
+  cni_network_config: |-
+    {
+      "name": "k8s-pod-network",
+      "cniVersion": "0.3.1",
+      "plugins": [
+        {
+          "type": "calico",
+          "log_level": "info",
+          "log_file_path": "/var/log/calico/cni/cni.log",
+          "datastore_type": "kubernetes",
+          "nodename": "__KUBERNETES_NODE_NAME__",
+          "mtu": __CNI_MTU__,
+          "ipam": {
+              "type": "calico-ipam"
+          },
+          "policy": {
+              "type": "k8s"
+          },
+          "kubernetes": {
+              "kubeconfig": "__KUBECONFIG_FILEPATH__"
+          }
+        },
+        {
+          "type": "portmap",
+          "snat": true,
+          "capabilities": {"portMappings": true}
+        },
+        {
+          "type": "bandwidth",
+          "capabilities": {"bandwidth": true}
+        }
+      ]
+    }
+
+---
+# Source: calico/templates/kdd-crds.yaml
+
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: bgpconfigurations.crd.projectcalico.org
+spec:
+  group: crd.projectcalico.org
+  names:
+    kind: BGPConfiguration
+    listKind: BGPConfigurationList
+    plural: bgpconfigurations
+    singular: bgpconfiguration
+  scope: Cluster
+  versions:
+  - name: v1
+    schema:
+      openAPIV3Schema:
+        description: BGPConfiguration contains the configuration for any BGP routing.
+        properties:
+          apiVersion:
+            description: 'APIVersion defines the versioned schema of this representation
+              of an object. Servers should convert recognized schemas to the latest
+              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+            type: string
+          kind:
+            description: 'Kind is a string value representing the REST resource this
+              object represents. Servers may infer this from the endpoint the client
+              submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: BGPConfigurationSpec contains the values of the BGP configuration. + properties: + asNumber: + description: 'ASNumber is the default AS number used by a node. [Default: + 64512]' + format: int32 + type: integer + bindMode: + description: BindMode indicates whether to listen for BGP connections + on all addresses (None) or only on the node's canonical IP address + Node.Spec.BGP.IPvXAddress (NodeIP). Default behaviour is to listen + for BGP connections on all addresses. + type: string + communities: + description: Communities is a list of BGP community values and their + arbitrary names for tagging routes. + items: + description: Community contains standard or large community value + and its name. + properties: + name: + description: Name given to community value. + type: string + value: + description: Value must be of format `aa:nn` or `aa:nn:mm`. + For standard community use `aa:nn` format, where `aa` and + `nn` are 16 bit number. For large community use `aa:nn:mm` + format, where `aa`, `nn` and `mm` are 32 bit number. Where, + `aa` is an AS Number, `nn` and `mm` are per-AS identifier. + pattern: ^(\d+):(\d+)$|^(\d+):(\d+):(\d+)$ + type: string + type: object + type: array + listenPort: + description: ListenPort is the port where BGP protocol should listen. + Defaults to 179 + maximum: 65535 + minimum: 1 + type: integer + logSeverityScreen: + description: 'LogSeverityScreen is the log severity above which logs + are sent to the stdout. [Default: INFO]' + type: string + nodeMeshMaxRestartTime: + description: Time to allow for software restart for node-to-mesh peerings. When + specified, this is configured as the graceful restart timeout. When + not specified, the BIRD default of 120s is used. This field can + only be set on the default BGPConfiguration instance and requires + that NodeMesh is enabled + type: string + nodeMeshPassword: + description: Optional BGP password for full node-to-mesh peerings. + This field can only be set on the default BGPConfiguration instance + and requires that NodeMesh is enabled + properties: + secretKeyRef: + description: Selects a key of a secret in the node pod's namespace. + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + type: object + nodeToNodeMeshEnabled: + description: 'NodeToNodeMeshEnabled sets whether full node to node + BGP mesh is enabled. [Default: true]' + type: boolean + prefixAdvertisements: + description: PrefixAdvertisements contains per-prefix advertisement + configuration. + items: + description: PrefixAdvertisement configures advertisement properties + for the specified CIDR. + properties: + cidr: + description: CIDR for which properties should be advertised. + type: string + communities: + description: Communities can be list of either community names + already defined in `Specs.Communities` or community value + of format `aa:nn` or `aa:nn:mm`. For standard community use + `aa:nn` format, where `aa` and `nn` are 16 bit number. 
For + large community use `aa:nn:mm` format, where `aa`, `nn` and + `mm` are 32 bit number. Where,`aa` is an AS Number, `nn` and + `mm` are per-AS identifier. + items: + type: string + type: array + type: object + type: array + serviceClusterIPs: + description: ServiceClusterIPs are the CIDR blocks from which service + cluster IPs are allocated. If specified, Calico will advertise these + blocks, as well as any cluster IPs within them. + items: + description: ServiceClusterIPBlock represents a single allowed ClusterIP + CIDR block. + properties: + cidr: + type: string + type: object + type: array + serviceExternalIPs: + description: ServiceExternalIPs are the CIDR blocks for Kubernetes + Service External IPs. Kubernetes Service ExternalIPs will only be + advertised if they are within one of these blocks. + items: + description: ServiceExternalIPBlock represents a single allowed + External IP CIDR block. + properties: + cidr: + type: string + type: object + type: array + serviceLoadBalancerIPs: + description: ServiceLoadBalancerIPs are the CIDR blocks for Kubernetes + Service LoadBalancer IPs. Kubernetes Service status.LoadBalancer.Ingress + IPs will only be advertised if they are within one of these blocks. + items: + description: ServiceLoadBalancerIPBlock represents a single allowed + LoadBalancer IP CIDR block. + properties: + cidr: + type: string + type: object + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: bgppeers.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: BGPPeer + listKind: BGPPeerList + plural: bgppeers + singular: bgppeer + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: BGPPeerSpec contains the specification for a BGPPeer resource. + properties: + asNumber: + description: The AS Number of the peer. + format: int32 + type: integer + keepOriginalNextHop: + description: Option to keep the original nexthop field when routes + are sent to a BGP Peer. Setting "true" configures the selected BGP + Peers node to use the "next hop keep;" instead of "next hop self;"(default) + in the specific branch of the Node on "bird.cfg". + type: boolean + maxRestartTime: + description: Time to allow for software restart. When specified, + this is configured as the graceful restart timeout. When not specified, + the BIRD default of 120s is used. + type: string + node: + description: The node name identifying the Calico node instance that + is targeted by this peer. If this is not set, and no nodeSelector + is specified, then this BGP peer selects all nodes in the cluster. 
+ type: string + nodeSelector: + description: Selector for the nodes that should have this peering. When + this is set, the Node field must be empty. + type: string + numAllowedLocalASNumbers: + description: Maximum number of local AS numbers that are allowed in + the AS path for received routes. This removes BGP loop prevention + and should only be used if absolutely necesssary. + format: int32 + type: integer + password: + description: Optional BGP password for the peerings generated by this + BGPPeer resource. + properties: + secretKeyRef: + description: Selects a key of a secret in the node pod's namespace. + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + type: object + peerIP: + description: The IP address of the peer followed by an optional port + number to peer with. If port number is given, format should be `[]:port` + or `:` for IPv4. If optional port number is not set, + and this peer IP and ASNumber belongs to a calico/node with ListenPort + set in BGPConfiguration, then we use that port to peer. + type: string + peerSelector: + description: Selector for the remote nodes to peer with. When this + is set, the PeerIP and ASNumber fields must be empty. For each + peering between the local node and selected remote nodes, we configure + an IPv4 peering if both ends have NodeBGPSpec.IPv4Address specified, + and an IPv6 peering if both ends have NodeBGPSpec.IPv6Address specified. The + remote AS number comes from the remote node's NodeBGPSpec.ASNumber, + or the global default if that is not set. + type: string + sourceAddress: + description: Specifies whether and how to configure a source address + for the peerings generated by this BGPPeer resource. Default value + "UseNodeIP" means to configure the node IP as the source address. "None" + means not to configure a source address. + type: string + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: blockaffinities.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: BlockAffinity + listKind: BlockAffinityList + plural: blockaffinities + singular: blockaffinity + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: BlockAffinitySpec contains the specification for a BlockAffinity + resource. + properties: + cidr: + type: string + deleted: + description: Deleted indicates that this block affinity is being deleted. + This field is a string for compatibility with older releases that + mistakenly treat this field as a string. + type: string + node: + type: string + state: + type: string + required: + - cidr + - deleted + - node + - state + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: caliconodestatuses.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: CalicoNodeStatus + listKind: CalicoNodeStatusList + plural: caliconodestatuses + singular: caliconodestatus + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: CalicoNodeStatusSpec contains the specification for a CalicoNodeStatus + resource. + properties: + classes: + description: Classes declares the types of information to monitor + for this calico/node, and allows for selective status reporting + about certain subsets of information. + items: + type: string + type: array + node: + description: The node name identifies the Calico node instance for + node status. + type: string + updatePeriodSeconds: + description: UpdatePeriodSeconds is the period at which CalicoNodeStatus + should be updated. Set to 0 to disable CalicoNodeStatus refresh. + Maximum update period is one day. + format: int32 + type: integer + type: object + status: + description: CalicoNodeStatusStatus defines the observed state of CalicoNodeStatus. + No validation needed for status since it is updated by Calico. + properties: + agent: + description: Agent holds agent status on the node. + properties: + birdV4: + description: BIRDV4 represents the latest observed status of bird4. + properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + birdV6: + description: BIRDV6 represents the latest observed status of bird6. 
+ properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + type: object + bgp: + description: BGP holds node BGP status. + properties: + numberEstablishedV4: + description: The total number of IPv4 established bgp sessions. + type: integer + numberEstablishedV6: + description: The total number of IPv6 established bgp sessions. + type: integer + numberNotEstablishedV4: + description: The total number of IPv4 non-established bgp sessions. + type: integer + numberNotEstablishedV6: + description: The total number of IPv6 non-established bgp sessions. + type: integer + peersV4: + description: PeersV4 represents IPv4 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + peersV6: + description: PeersV6 represents IPv6 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + required: + - numberEstablishedV4 + - numberEstablishedV6 + - numberNotEstablishedV4 + - numberNotEstablishedV6 + type: object + lastUpdated: + description: LastUpdated is a timestamp representing the server time + when CalicoNodeStatus object last updated. It is represented in + RFC3339 form and is in UTC. + format: date-time + nullable: true + type: string + routes: + description: Routes reports routes known to the Calico BGP daemon + on the node. + properties: + routesV4: + description: RoutesV4 represents IPv4 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. + properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. 
+ type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + routesV6: + description: RoutesV6 represents IPv6 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. + properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. + type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + type: object + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: clusterinformations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: ClusterInformation + listKind: ClusterInformationList + plural: clusterinformations + singular: clusterinformation + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + description: ClusterInformation contains the cluster specific information. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: ClusterInformationSpec contains the values of describing + the cluster. + properties: + calicoVersion: + description: CalicoVersion is the version of Calico that the cluster + is running + type: string + clusterGUID: + description: ClusterGUID is the GUID of the cluster + type: string + clusterType: + description: ClusterType describes the type of the cluster + type: string + datastoreReady: + description: DatastoreReady is used during significant datastore migrations + to signal to components such as Felix that it should wait before + accessing the datastore. + type: boolean + variant: + description: Variant declares which variant of Calico should be active. 
+ type: string + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: felixconfigurations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: FelixConfiguration + listKind: FelixConfigurationList + plural: felixconfigurations + singular: felixconfiguration + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + description: Felix Configuration contains the configuration for Felix. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: FelixConfigurationSpec contains the values of the Felix configuration. + properties: + allowIPIPPacketsFromWorkloads: + description: 'AllowIPIPPacketsFromWorkloads controls whether Felix + will add a rule to drop IPIP encapsulated traffic from workloads + [Default: false]' + type: boolean + allowVXLANPacketsFromWorkloads: + description: 'AllowVXLANPacketsFromWorkloads controls whether Felix + will add a rule to drop VXLAN encapsulated traffic from workloads + [Default: false]' + type: boolean + awsSrcDstCheck: + description: 'Set source-destination-check on AWS EC2 instances. Accepted + value must be one of "DoNothing", "Enable" or "Disable". [Default: + DoNothing]' + enum: + - DoNothing + - Enable + - Disable + type: string + bpfConnectTimeLoadBalancingEnabled: + description: 'BPFConnectTimeLoadBalancingEnabled when in BPF mode, + controls whether Felix installs the connection-time load balancer. The + connect-time load balancer is required for the host to be able to + reach Kubernetes services and it improves the performance of pod-to-service + connections. The only reason to disable it is for debugging purposes. [Default: + true]' + type: boolean + bpfDataIfacePattern: + description: BPFDataIfacePattern is a regular expression that controls + which interfaces Felix should attach BPF programs to in order to + catch traffic to/from the network. This needs to match the interfaces + that Calico workload traffic flows over as well as any interfaces + that handle incoming traffic to nodeports and services from outside + the cluster. It should not match the workload interfaces (usually + named cali...). + type: string + bpfDisableUnprivileged: + description: 'BPFDisableUnprivileged, if enabled, Felix sets the kernel.unprivileged_bpf_disabled + sysctl to disable unprivileged use of BPF. This ensures that unprivileged + users cannot access Calico''s BPF maps and cannot insert their own + BPF programs to interfere with Calico''s. [Default: true]' + type: boolean + bpfEnabled: + description: 'BPFEnabled, if enabled Felix will use the BPF dataplane. 
+ [Default: false]' + type: boolean + bpfEnforceRPF: + description: 'BPFEnforceRPF enforce strict RPF on all interfaces with + BPF programs regardless of what is the per-interfaces or global + setting. Possible values are Disabled or Strict. [Default: Strict]' + type: string + bpfExtToServiceConnmark: + description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit + mark that is set on connections from an external client to a local + service. This mark allows us to control how packets of that connection + are routed within the host and how is routing intepreted by RPF + check. [Default: 0]' + type: integer + bpfExternalServiceMode: + description: 'BPFExternalServiceMode in BPF mode, controls how connections + from outside the cluster to services (node ports and cluster IPs) + are forwarded to remote workloads. If set to "Tunnel" then both + request and response traffic is tunneled to the remote node. If + set to "DSR", the request traffic is tunneled but the response traffic + is sent directly from the remote node. In "DSR" mode, the remote + node appears to use the IP of the ingress node; this requires a + permissive L2 network. [Default: Tunnel]' + type: string + bpfKubeProxyEndpointSlicesEnabled: + description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls + whether Felix's embedded kube-proxy accepts EndpointSlices or not. + type: boolean + bpfKubeProxyIptablesCleanupEnabled: + description: 'BPFKubeProxyIptablesCleanupEnabled, if enabled in BPF + mode, Felix will proactively clean up the upstream Kubernetes kube-proxy''s + iptables chains. Should only be enabled if kube-proxy is not running. [Default: + true]' + type: boolean + bpfKubeProxyMinSyncPeriod: + description: 'BPFKubeProxyMinSyncPeriod, in BPF mode, controls the + minimum time between updates to the dataplane for Felix''s embedded + kube-proxy. Lower values give reduced set-up latency. Higher values + reduce Felix CPU usage by batching up more work. [Default: 1s]' + type: string + bpfLogLevel: + description: 'BPFLogLevel controls the log level of the BPF programs + when in BPF dataplane mode. One of "Off", "Info", or "Debug". The + logs are emitted to the BPF trace pipe, accessible with the command + `tc exec bpf debug`. [Default: Off].' + type: string + bpfMapSizeConntrack: + description: 'BPFMapSizeConntrack sets the size for the conntrack + map. This map must be large enough to hold an entry for each active + connection. Warning: changing the size of the conntrack map can + cause disruption.' + type: integer + bpfMapSizeIPSets: + description: BPFMapSizeIPSets sets the size for ipsets map. The IP + sets map must be large enough to hold an entry for each endpoint + matched by every selector in the source/destination matches in network + policy. Selectors such as "all()" can result in large numbers of + entries (one entry per endpoint in that case). + type: integer + bpfMapSizeNATAffinity: + type: integer + bpfMapSizeNATBackend: + description: BPFMapSizeNATBackend sets the size for nat back end map. + This is the total number of endpoints. This is mostly more than + the size of the number of services. + type: integer + bpfMapSizeNATFrontend: + description: BPFMapSizeNATFrontend sets the size for nat front end + map. FrontendMap should be large enough to hold an entry for each + nodeport, external IP and each port in each service. + type: integer + bpfMapSizeRoute: + description: BPFMapSizeRoute sets the size for the routes map. 
The + routes map should be large enough to hold one entry per workload + and a handful of entries per host (enough to cover its own IPs and + tunnel IPs). + type: integer + bpfPSNATPorts: + anyOf: + - type: integer + - type: string + description: 'BPFPSNATPorts sets the range from which we randomly + pick a port if there is a source port collision. This should be + within the ephemeral range as defined by RFC 6056 (1024–65535) and + preferably outside the ephemeral ranges used by common operating + systems. Linux uses 32768–60999, while others mostly use the IANA + defined range 49152–65535. It is not necessarily a problem if this + range overlaps with the operating systems. Both ends of the range + are inclusive. [Default: 20000:29999]' + pattern: ^.* + x-kubernetes-int-or-string: true + chainInsertMode: + description: 'ChainInsertMode controls whether Felix hooks the kernel''s + top-level iptables chains by inserting a rule at the top of the + chain or by appending a rule at the bottom. insert is the safe default + since it prevents Calico''s rules from being bypassed. If you switch + to append mode, be sure that the other rules in the chains signal + acceptance by falling through to the Calico rules, otherwise the + Calico policy will be bypassed. [Default: insert]' + type: string + dataplaneDriver: + description: DataplaneDriver filename of the external dataplane driver + to use. Only used if UseInternalDataplaneDriver is set to false. + type: string + dataplaneWatchdogTimeout: + description: 'DataplaneWatchdogTimeout is the readiness/liveness timeout + used for Felix''s (internal) dataplane driver. Increase this value + if you experience spurious non-ready or non-live events when Felix + is under heavy load. Decrease the value to get felix to report non-live + or non-ready more quickly. [Default: 90s]' + type: string + debugDisableLogDropping: + type: boolean + debugMemoryProfilePath: + type: string + debugSimulateCalcGraphHangAfter: + type: string + debugSimulateDataplaneHangAfter: + type: string + defaultEndpointToHostAction: + description: 'DefaultEndpointToHostAction controls what happens to + traffic that goes from a workload endpoint to the host itself (after + the traffic hits the endpoint egress policy). By default Calico + blocks traffic from workload endpoints to the host itself with an + iptables "DROP" action. If you want to allow some or all traffic + from endpoint to host, set this parameter to RETURN or ACCEPT. Use + RETURN if you have your own rules in the iptables "INPUT" chain; + Calico will insert its rules at the top of that chain, then "RETURN" + packets to the "INPUT" chain once it has completed processing workload + endpoint egress policy. Use ACCEPT to unconditionally accept packets + from workloads after processing workload endpoint egress policy. + [Default: Drop]' + type: string + deviceRouteProtocol: + description: This defines the route protocol added to programmed device + routes, by default this will be RTPROT_BOOT when left blank. + type: integer + deviceRouteSourceAddress: + description: This is the IPv4 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. + type: string + deviceRouteSourceAddressIPv6: + description: This is the IPv6 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. 
+ type: string + disableConntrackInvalidCheck: + type: boolean + endpointReportingDelay: + type: string + endpointReportingEnabled: + type: boolean + externalNodesList: + description: ExternalNodesCIDRList is a list of CIDR's of external-non-calico-nodes + which may source tunnel traffic and have the tunneled traffic be + accepted at calico nodes. + items: + type: string + type: array + failsafeInboundHostPorts: + description: 'FailsafeInboundHostPorts is a list of UDP/TCP ports + and CIDRs that Felix will allow incoming traffic to host endpoints + on irrespective of the security policy. This is useful to avoid + accidentally cutting off a host with incorrect configuration. For + back-compatibility, if the protocol is not specified, it defaults + to "tcp". If a CIDR is not specified, it will allow traffic from + all addresses. To disable all inbound host ports, use the value + none. The default value allows ssh access and DHCP. [Default: tcp:22, + udp:68, tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, tcp:6667]' + items: + description: ProtoPort is combination of protocol, port, and CIDR. + Protocol and port must be specified. + properties: + net: + type: string + port: + type: integer + protocol: + type: string + required: + - port + - protocol + type: object + type: array + failsafeOutboundHostPorts: + description: 'FailsafeOutboundHostPorts is a list of UDP/TCP ports + and CIDRs that Felix will allow outgoing traffic from host endpoints + to irrespective of the security policy. This is useful to avoid + accidentally cutting off a host with incorrect configuration. For + back-compatibility, if the protocol is not specified, it defaults + to "tcp". If a CIDR is not specified, it will allow traffic from + all addresses. To disable all outbound host ports, use the value + none. The default value opens etcd''s standard ports to ensure that + Felix does not get cut off from etcd as well as allowing DHCP and + DNS. [Default: tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, + tcp:6667, udp:53, udp:67]' + items: + description: ProtoPort is combination of protocol, port, and CIDR. + Protocol and port must be specified. + properties: + net: + type: string + port: + type: integer + protocol: + type: string + required: + - port + - protocol + type: object + type: array + featureDetectOverride: + description: FeatureDetectOverride is used to override the feature + detection. Values are specified in a comma separated list with no + spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". + "true" or "false" will force the feature, empty or omitted values + are auto-detected. + type: string + floatingIPs: + description: FloatingIPs configures whether or not Felix will program + floating IP addresses. + enum: + - Enabled + - Disabled + type: string + genericXDPEnabled: + description: 'GenericXDPEnabled enables Generic XDP so network cards + that don''t support XDP offload or driver modes can use XDP. This + is not recommended since it doesn''t provide better performance + than iptables. [Default: false]' + type: boolean + healthEnabled: + type: boolean + healthHost: + type: string + healthPort: + type: integer + interfaceExclude: + description: 'InterfaceExclude is a comma-separated list of interfaces + that Felix should exclude when monitoring for host endpoints. The + default value ensures that Felix ignores Kubernetes'' IPVS dummy + interface, which is used internally by kube-proxy. 
If you want to + exclude multiple interface names using a single value, the list + supports regular expressions. For regular expressions you must wrap + the value with ''/''. For example having values ''/^kube/,veth1'' + will exclude all interfaces that begin with ''kube'' and also the + interface ''veth1''. [Default: kube-ipvs0]' + type: string + interfacePrefix: + description: 'InterfacePrefix is the interface name prefix that identifies + workload endpoints and so distinguishes them from host endpoint + interfaces. Note: in environments other than bare metal, the orchestrators + configure this appropriately. For example our Kubernetes and Docker + integrations set the ''cali'' value, and our OpenStack integration + sets the ''tap'' value. [Default: cali]' + type: string + interfaceRefreshInterval: + description: InterfaceRefreshInterval is the period at which Felix + rescans local interfaces to verify their state. The rescan can be + disabled by setting the interval to 0. + type: string + ipipEnabled: + description: 'IPIPEnabled overrides whether Felix should configure + an IPIP interface on the host. Optional as Felix determines this + based on the existing IP pools. [Default: nil (unset)]' + type: boolean + ipipMTU: + description: 'IPIPMTU is the MTU to set on the tunnel device. See + Configuring MTU [Default: 1440]' + type: integer + ipsetsRefreshInterval: + description: 'IpsetsRefreshInterval is the period at which Felix re-checks + all iptables state to ensure that no other process has accidentally + broken Calico''s rules. Set to 0 to disable iptables refresh. [Default: + 90s]' + type: string + iptablesBackend: + description: IptablesBackend specifies which backend of iptables will + be used. The default is legacy. + type: string + iptablesFilterAllowAction: + type: string + iptablesLockFilePath: + description: 'IptablesLockFilePath is the location of the iptables + lock file. You may need to change this if the lock file is not in + its standard location (for example if you have mapped it into Felix''s + container at a different path). [Default: /run/xtables.lock]' + type: string + iptablesLockProbeInterval: + description: 'IptablesLockProbeInterval is the time that Felix will + wait between attempts to acquire the iptables lock if it is not + available. Lower values make Felix more responsive when the lock + is contended, but use more CPU. [Default: 50ms]' + type: string + iptablesLockTimeout: + description: 'IptablesLockTimeout is the time that Felix will wait + for the iptables lock, or 0, to disable. To use this feature, Felix + must share the iptables lock file with all other processes that + also take the lock. When running Felix inside a container, this + requires the /run directory of the host to be mounted into the calico/node + or calico/felix container. [Default: 0s disabled]' + type: string + iptablesMangleAllowAction: + type: string + iptablesMarkMask: + description: 'IptablesMarkMask is the mask that Felix selects its + IPTables Mark bits from. Should be a 32 bit hexadecimal number with + at least 8 bits set, none of which clash with any other mark bits + in use on the system. [Default: 0xff000000]' + format: int32 + type: integer + iptablesNATOutgoingInterfaceFilter: + type: string + iptablesPostWriteCheckInterval: + description: 'IptablesPostWriteCheckInterval is the period after Felix + has done a write to the dataplane that it schedules an extra read + back in order to check the write was not clobbered by another process. 
+ This should only occur if another application on the system doesn''t + respect the iptables lock. [Default: 1s]' + type: string + iptablesRefreshInterval: + description: 'IptablesRefreshInterval is the period at which Felix + re-checks the IP sets in the dataplane to ensure that no other process + has accidentally broken Calico''s rules. Set to 0 to disable IP + sets refresh. Note: the default for this value is lower than the + other refresh intervals as a workaround for a Linux kernel bug that + was fixed in kernel version 4.11. If you are using v4.11 or greater + you may want to set this to, a higher value to reduce Felix CPU + usage. [Default: 10s]' + type: string + ipv6Support: + description: IPv6Support controls whether Felix enables support for + IPv6 (if supported by the in-use dataplane). + type: boolean + kubeNodePortRanges: + description: 'KubeNodePortRanges holds list of port ranges used for + service node ports. Only used if felix detects kube-proxy running + in ipvs mode. Felix uses these ranges to separate host and workload + traffic. [Default: 30000:32767].' + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + logDebugFilenameRegex: + description: LogDebugFilenameRegex controls which source code files + have their Debug log output included in the logs. Only logs from + files with names that match the given regular expression are included. The + filter only applies to Debug level logs. + type: string + logFilePath: + description: 'LogFilePath is the full path to the Felix log. Set to + none to disable file logging. [Default: /var/log/calico/felix.log]' + type: string + logPrefix: + description: 'LogPrefix is the log prefix that Felix uses when rendering + LOG rules. [Default: calico-packet]' + type: string + logSeverityFile: + description: 'LogSeverityFile is the log severity above which logs + are sent to the log file. [Default: Info]' + type: string + logSeverityScreen: + description: 'LogSeverityScreen is the log severity above which logs + are sent to the stdout. [Default: Info]' + type: string + logSeveritySys: + description: 'LogSeveritySys is the log severity above which logs + are sent to the syslog. Set to None for no logging to syslog. [Default: + Info]' + type: string + maxIpsetSize: + type: integer + metadataAddr: + description: 'MetadataAddr is the IP address or domain name of the + server that can answer VM queries for cloud-init metadata. In OpenStack, + this corresponds to the machine running nova-api (or in Ubuntu, + nova-api-metadata). A value of none (case insensitive) means that + Felix should not set up any NAT rule for the metadata path. [Default: + 127.0.0.1]' + type: string + metadataPort: + description: 'MetadataPort is the port of the metadata server. This, + combined with global.MetadataAddr (if not ''None''), is used to + set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort. + In most cases this should not need to be changed [Default: 8775].' + type: integer + mtuIfacePattern: + description: MTUIfacePattern is a regular expression that controls + which interfaces Felix should scan in order to calculate the host's + MTU. This should not match workload interfaces (usually named cali...). + type: string + natOutgoingAddress: + description: NATOutgoingAddress specifies an address to use when performing + source NAT for traffic in a natOutgoing pool that is leaving the + network. 
By default the address used is an address on the interface + the traffic is leaving on (ie it uses the iptables MASQUERADE target) + type: string + natPortRange: + anyOf: + - type: integer + - type: string + description: NATPortRange specifies the range of ports that is used + for port mapping when doing outgoing NAT. When unset the default + behavior of the network stack is used. + pattern: ^.* + x-kubernetes-int-or-string: true + netlinkTimeout: + type: string + openstackRegion: + description: 'OpenstackRegion is the name of the region that a particular + Felix belongs to. In a multi-region Calico/OpenStack deployment, + this must be configured somehow for each Felix (here in the datamodel, + or in felix.cfg or the environment on each compute node), and must + match the [calico] openstack_region value configured in neutron.conf + on each node. [Default: Empty]' + type: string + policySyncPathPrefix: + description: 'PolicySyncPathPrefix is used to by Felix to communicate + policy changes to external services, like Application layer policy. + [Default: Empty]' + type: string + prometheusGoMetricsEnabled: + description: 'PrometheusGoMetricsEnabled disables Go runtime metrics + collection, which the Prometheus client does by default, when set + to false. This reduces the number of metrics reported, reducing + Prometheus load. [Default: true]' + type: boolean + prometheusMetricsEnabled: + description: 'PrometheusMetricsEnabled enables the Prometheus metrics + server in Felix if set to true. [Default: false]' + type: boolean + prometheusMetricsHost: + description: 'PrometheusMetricsHost is the host that the Prometheus + metrics server should bind to. [Default: empty]' + type: string + prometheusMetricsPort: + description: 'PrometheusMetricsPort is the TCP port that the Prometheus + metrics server should bind to. [Default: 9091]' + type: integer + prometheusProcessMetricsEnabled: + description: 'PrometheusProcessMetricsEnabled disables process metrics + collection, which the Prometheus client does by default, when set + to false. This reduces the number of metrics reported, reducing + Prometheus load. [Default: true]' + type: boolean + prometheusWireGuardMetricsEnabled: + description: 'PrometheusWireGuardMetricsEnabled disables wireguard + metrics collection, which the Prometheus client does by default, + when set to false. This reduces the number of metrics reported, + reducing Prometheus load. [Default: true]' + type: boolean + removeExternalRoutes: + description: Whether or not to remove device routes that have not + been programmed by Felix. Disabling this will allow external applications + to also add device routes. This is enabled by default which means + we will remove externally added routes. + type: boolean + reportingInterval: + description: 'ReportingInterval is the interval at which Felix reports + its status into the datastore or 0 to disable. Must be non-zero + in OpenStack deployments. [Default: 30s]' + type: string + reportingTTL: + description: 'ReportingTTL is the time-to-live setting for process-wide + status reports. [Default: 90s]' + type: string + routeRefreshInterval: + description: 'RouteRefreshInterval is the period at which Felix re-checks + the routes in the dataplane to ensure that no other process has + accidentally broken Calico''s rules. Set to 0 to disable route refresh. + [Default: 90s]' + type: string + routeSource: + description: 'RouteSource configures where Felix gets its routing + information. - WorkloadIPs: use workload endpoints to construct + routes. 
- CalicoIPAM: the default - use IPAM data to construct routes.' + type: string + routeTableRange: + description: Deprecated in favor of RouteTableRanges. Calico programs + additional Linux route tables for various purposes. RouteTableRange + specifies the indices of the route tables that Calico should use. + properties: + max: + type: integer + min: + type: integer + required: + - max + - min + type: object + routeTableRanges: + description: Calico programs additional Linux route tables for various + purposes. RouteTableRanges specifies a set of table index ranges + that Calico should use. Deprecates`RouteTableRange`, overrides `RouteTableRange`. + items: + properties: + max: + type: integer + min: + type: integer + required: + - max + - min + type: object + type: array + serviceLoopPrevention: + description: 'When service IP advertisement is enabled, prevent routing + loops to service IPs that are not in use, by dropping or rejecting + packets that do not get DNAT''d by kube-proxy. Unless set to "Disabled", + in which case such routing loops continue to be allowed. [Default: + Drop]' + type: string + sidecarAccelerationEnabled: + description: 'SidecarAccelerationEnabled enables experimental sidecar + acceleration [Default: false]' + type: boolean + usageReportingEnabled: + description: 'UsageReportingEnabled reports anonymous Calico version + number and cluster size to projectcalico.org. Logs warnings returned + by the usage server. For example, if a significant security vulnerability + has been discovered in the version of Calico being used. [Default: + true]' + type: boolean + usageReportingInitialDelay: + description: 'UsageReportingInitialDelay controls the minimum delay + before Felix makes a report. [Default: 300s]' + type: string + usageReportingInterval: + description: 'UsageReportingInterval controls the interval at which + Felix makes reports. [Default: 86400s]' + type: string + useInternalDataplaneDriver: + description: UseInternalDataplaneDriver, if true, Felix will use its + internal dataplane programming logic. If false, it will launch + an external dataplane driver and communicate with it over protobuf. + type: boolean + vxlanEnabled: + description: 'VXLANEnabled overrides whether Felix should create the + VXLAN tunnel device for VXLAN networking. Optional as Felix determines + this based on the existing IP pools. [Default: nil (unset)]' + type: boolean + vxlanMTU: + description: 'VXLANMTU is the MTU to set on the IPv4 VXLAN tunnel + device. See Configuring MTU [Default: 1410]' + type: integer + vxlanMTUV6: + description: 'VXLANMTUV6 is the MTU to set on the IPv6 VXLAN tunnel + device. See Configuring MTU [Default: 1390]' + type: integer + vxlanPort: + type: integer + vxlanVNI: + type: integer + wireguardEnabled: + description: 'WireguardEnabled controls whether Wireguard is enabled. + [Default: false]' + type: boolean + wireguardHostEncryptionEnabled: + description: 'WireguardHostEncryptionEnabled controls whether Wireguard + host-to-host encryption is enabled. [Default: false]' + type: boolean + wireguardInterfaceName: + description: 'WireguardInterfaceName specifies the name to use for + the Wireguard interface. [Default: wg.calico]' + type: string + wireguardKeepAlive: + description: 'WireguardKeepAlive controls Wireguard PersistentKeepalive + option. Set 0 to disable. [Default: 0]' + type: string + wireguardListeningPort: + description: 'WireguardListeningPort controls the listening port used + by Wireguard. 
[Default: 51820]' + type: integer + wireguardMTU: + description: 'WireguardMTU controls the MTU on the Wireguard interface. + See Configuring MTU [Default: 1420]' + type: integer + wireguardRoutingRulePriority: + description: 'WireguardRoutingRulePriority controls the priority value + to use for the Wireguard routing rule. [Default: 99]' + type: integer + workloadSourceSpoofing: + description: WorkloadSourceSpoofing controls whether pods can use + the allowedSourcePrefixes annotation to send traffic with a source + IP address that is not theirs. This is disabled by default. When + set to "Any", pods can request any prefix. + type: string + xdpEnabled: + description: 'XDPEnabled enables XDP acceleration for suitable untracked + incoming deny rules. [Default: true]' + type: boolean + xdpRefreshInterval: + description: 'XDPRefreshInterval is the period at which Felix re-checks + all XDP state to ensure that no other process has accidentally broken + Calico''s BPF maps or attached programs. Set to 0 to disable XDP + refresh. [Default: 90s]' + type: string + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: globalnetworkpolicies.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: GlobalNetworkPolicy + listKind: GlobalNetworkPolicyList + plural: globalnetworkpolicies + singular: globalnetworkpolicy + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + properties: + applyOnForward: + description: ApplyOnForward indicates to apply the rules in this policy + on forward traffic. + type: boolean + doNotTrack: + description: DoNotTrack indicates whether packets matched by the rules + in this policy should go through the data plane's connection tracking, + such as Linux conntrack. If True, the rules in this policy are + applied before any data plane connection tracking, and packets allowed + by this policy are marked as not to be tracked. + type: boolean + egress: + description: The ordered set of egress rules. Each rule contains + a set of packet match criteria and a corresponding action to apply. + items: + description: "A Rule encapsulates a set of match criteria and an + action. Both selector-based security Policy and security Profiles + reference rules - separated out as a list of rules for both ingress + and egress packet matching. \n Each positive match criteria has + a negated version, prefixed with \"Not\". All the match criteria + within a rule must be satisfied for a packet to match. A single + rule can contain the positive and negative version of a match + and both must be satisfied for the rule to match." 
+ properties: + action: + type: string + destination: + description: Destination contains the match criteria that apply + to destination entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." 
+ type: string
+ serviceAccounts:
+ description: ServiceAccounts is an optional field that restricts
+ the rule to only apply to traffic that originates from
+ (or terminates at) a pod running as a matching service
+ account.
+ properties:
+ names:
+ description: Names is an optional field that restricts
+ the rule to only apply to traffic that originates
+ from (or terminates at) a pod running as a service
+ account whose name is in the list.
+ items:
+ type: string
+ type: array
+ selector:
+ description: Selector is an optional field that restricts
+ the rule to only apply to traffic that originates
+ from (or terminates at) a pod running as a service
+ account that matches the given label selector. If
+ both Names and Selector are specified then they are
+ AND'ed.
+ type: string
+ type: object
+ services:
+ description: "Services is an optional field that contains
+ options for matching Kubernetes Services. If specified,
+ only traffic that originates from or terminates at endpoints
+ within the selected service(s) will be matched, and only
+ to/from each endpoint's port. \n Services cannot be specified
+ on the same rule as Selector, NotSelector, NamespaceSelector,
+ Nets, NotNets or ServiceAccounts. \n Ports and NotPorts
+ can only be specified with Services on ingress rules."
+ properties:
+ name:
+ description: Name specifies the name of a Kubernetes
+ Service to match.
+ type: string
+ namespace:
+ description: Namespace specifies the namespace of the
+ given Service. If left empty, the rule will match
+ within this policy's namespace.
+ type: string
+ type: object
+ type: object
+ http:
+ description: HTTP contains match criteria that apply to HTTP
+ requests.
+ properties:
+ methods:
+ description: Methods is an optional field that restricts
+ the rule to apply only to HTTP requests that use one of
+ the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
+ methods are OR'd together.
+ items:
+ type: string
+ type: array
+ paths:
+ description: 'Paths is an optional field that restricts
+ the rule to apply to HTTP requests that use one of the
+ listed HTTP Paths. Multiple paths are OR''d together.
+ e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
+ ONLY specify either an `exact` or a `prefix` match. The
+ validator will check for it.'
+ items:
+ description: 'HTTPPath specifies an HTTP path to match.
+ It may be either of the form: exact: <path>: which matches
+ the path exactly or prefix: <path-prefix>: which matches
+ the path prefix'
+ properties:
+ exact:
+ type: string
+ prefix:
+ type: string
+ type: object
+ type: array
+ type: object
+ icmp:
+ description: ICMP is an optional field that restricts the rule
+ to apply to a specific type and code of ICMP traffic. This
+ should only be specified if the Protocol field is set to "ICMP"
+ or "ICMPv6".
+ properties:
+ code:
+ description: Match on a specific ICMP code. If specified,
+ the Type value must also be specified. This is a technical
+ limitation imposed by the kernel's iptables firewall,
+ which Calico uses to enforce the rule.
+ type: integer
+ type:
+ description: Match on a specific ICMP type. For example
+ a value of 8 refers to ICMP Echo Request (i.e. pings).
+ type: integer
+ type: object
+ ipVersion:
+ description: IPVersion is an optional field that restricts the
+ rule to only match a specific IP version.
+ type: integer + metadata: + description: Metadata contains additional information for this + rule + properties: + annotations: + additionalProperties: + type: string + description: Annotations is a set of key value pairs that + give extra information about the rule + type: object + type: object + notICMP: + description: NotICMP is the negated version of the ICMP field. + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + notProtocol: + anyOf: + - type: integer + - type: string + description: NotProtocol is the negated version of the Protocol + field. + pattern: ^.* + x-kubernetes-int-or-string: true + protocol: + anyOf: + - type: integer + - type: string + description: "Protocol is an optional field that restricts the + rule to only apply to traffic of a specific IP protocol. Required + if any of the EntityRules contain Ports (because ports only + apply to certain protocols). \n Must be one of these string + values: \"TCP\", \"UDP\", \"ICMP\", \"ICMPv6\", \"SCTP\", + \"UDPLite\" or an integer in the range 1-255." + pattern: ^.* + x-kubernetes-int-or-string: true + source: + description: Source contains the match criteria that apply to + source entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. 
\n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." + type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. \n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + required: + - action + type: object + type: array + ingress: + description: The ordered set of ingress rules. Each rule contains + a set of packet match criteria and a corresponding action to apply. + items: + description: "A Rule encapsulates a set of match criteria and an + action. Both selector-based security Policy and security Profiles + reference rules - separated out as a list of rules for both ingress + and egress packet matching. \n Each positive match criteria has + a negated version, prefixed with \"Not\". All the match criteria + within a rule must be satisfied for a packet to match. 
A single + rule can contain the positive and negative version of a match + and both must be satisfied for the rule to match." + properties: + action: + type: string + destination: + description: Destination contains the match criteria that apply + to destination entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." 
+ type: string
+ serviceAccounts:
+ description: ServiceAccounts is an optional field that restricts
+ the rule to only apply to traffic that originates from
+ (or terminates at) a pod running as a matching service
+ account.
+ properties:
+ names:
+ description: Names is an optional field that restricts
+ the rule to only apply to traffic that originates
+ from (or terminates at) a pod running as a service
+ account whose name is in the list.
+ items:
+ type: string
+ type: array
+ selector:
+ description: Selector is an optional field that restricts
+ the rule to only apply to traffic that originates
+ from (or terminates at) a pod running as a service
+ account that matches the given label selector. If
+ both Names and Selector are specified then they are
+ AND'ed.
+ type: string
+ type: object
+ services:
+ description: "Services is an optional field that contains
+ options for matching Kubernetes Services. If specified,
+ only traffic that originates from or terminates at endpoints
+ within the selected service(s) will be matched, and only
+ to/from each endpoint's port. \n Services cannot be specified
+ on the same rule as Selector, NotSelector, NamespaceSelector,
+ Nets, NotNets or ServiceAccounts. \n Ports and NotPorts
+ can only be specified with Services on ingress rules."
+ properties:
+ name:
+ description: Name specifies the name of a Kubernetes
+ Service to match.
+ type: string
+ namespace:
+ description: Namespace specifies the namespace of the
+ given Service. If left empty, the rule will match
+ within this policy's namespace.
+ type: string
+ type: object
+ type: object
+ http:
+ description: HTTP contains match criteria that apply to HTTP
+ requests.
+ properties:
+ methods:
+ description: Methods is an optional field that restricts
+ the rule to apply only to HTTP requests that use one of
+ the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
+ methods are OR'd together.
+ items:
+ type: string
+ type: array
+ paths:
+ description: 'Paths is an optional field that restricts
+ the rule to apply to HTTP requests that use one of the
+ listed HTTP Paths. Multiple paths are OR''d together.
+ e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
+ ONLY specify either an `exact` or a `prefix` match. The
+ validator will check for it.'
+ items:
+ description: 'HTTPPath specifies an HTTP path to match.
+ It may be either of the form: exact: <path>: which matches
+ the path exactly or prefix: <path-prefix>: which matches
+ the path prefix'
+ properties:
+ exact:
+ type: string
+ prefix:
+ type: string
+ type: object
+ type: array
+ type: object
+ icmp:
+ description: ICMP is an optional field that restricts the rule
+ to apply to a specific type and code of ICMP traffic. This
+ should only be specified if the Protocol field is set to "ICMP"
+ or "ICMPv6".
+ properties:
+ code:
+ description: Match on a specific ICMP code. If specified,
+ the Type value must also be specified. This is a technical
+ limitation imposed by the kernel's iptables firewall,
+ which Calico uses to enforce the rule.
+ type: integer
+ type:
+ description: Match on a specific ICMP type. For example
+ a value of 8 refers to ICMP Echo Request (i.e. pings).
+ type: integer
+ type: object
+ ipVersion:
+ description: IPVersion is an optional field that restricts the
+ rule to only match a specific IP version.
+ type: integer + metadata: + description: Metadata contains additional information for this + rule + properties: + annotations: + additionalProperties: + type: string + description: Annotations is a set of key value pairs that + give extra information about the rule + type: object + type: object + notICMP: + description: NotICMP is the negated version of the ICMP field. + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + notProtocol: + anyOf: + - type: integer + - type: string + description: NotProtocol is the negated version of the Protocol + field. + pattern: ^.* + x-kubernetes-int-or-string: true + protocol: + anyOf: + - type: integer + - type: string + description: "Protocol is an optional field that restricts the + rule to only apply to traffic of a specific IP protocol. Required + if any of the EntityRules contain Ports (because ports only + apply to certain protocols). \n Must be one of these string + values: \"TCP\", \"UDP\", \"ICMP\", \"ICMPv6\", \"SCTP\", + \"UDPLite\" or an integer in the range 1-255." + pattern: ^.* + x-kubernetes-int-or-string: true + source: + description: Source contains the match criteria that apply to + source entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. 
\n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." + type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. \n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + required: + - action + type: object + type: array + namespaceSelector: + description: NamespaceSelector is an optional field for an expression + used to select a pod based on namespaces. + type: string + order: + description: Order is an optional field that specifies the order in + which the policy is applied. Policies with higher "order" are applied + after those with lower order. If the order is omitted, it may be + considered to be "infinite" - i.e. the policy will be applied last. Policies + with identical order will be applied in alphanumerical order based + on the Policy "Name". + type: number + preDNAT: + description: PreDNAT indicates to apply the rules in this policy before + any DNAT. 
+ type: boolean
+ selector:
+ description: "The selector is an expression used to pick out
+ the endpoints that the policy should be applied to. \n Selector
+ expressions follow this syntax: \n \tlabel == \"string_literal\"
+ \ -> comparison, e.g. my_label == \"foo bar\" \tlabel != \"string_literal\"
+ \ -> not equal; also matches if label is not present \tlabel in
+ { \"a\", \"b\", \"c\", ... } -> true if the value of label X is
+ one of \"a\", \"b\", \"c\" \tlabel not in { \"a\", \"b\", \"c\",
+ ... } -> true if the value of label X is not one of \"a\", \"b\",
+ \"c\" \thas(label_name) -> True if that label is present \t! expr
+ -> negation of expr \texpr && expr -> Short-circuit and \texpr
+ || expr -> Short-circuit or \t( expr ) -> parens for grouping \tall()
+ or the empty selector -> matches all endpoints. \n Label names are
+ allowed to contain alphanumerics, -, _ and /. String literals are
+ more permissive but they do not support escape characters. \n Examples
+ (with made-up labels): \n \ttype == \"webserver\" && deployment
+ == \"prod\" \ttype in {\"frontend\", \"backend\"} \tdeployment !=
+ \"dev\" \t! has(label_name)"
+ type: string
+ serviceAccountSelector:
+ description: ServiceAccountSelector is an optional field for an expression
+ used to select a pod based on service accounts.
+ type: string
+ types:
+ description: "Types indicates whether this policy applies to ingress,
+ or to egress, or to both. When not explicitly specified (and so
+ the value on creation is empty or nil), Calico defaults Types according
+ to what Ingress and Egress rules are present in the policy. The
+ default is: \n - [ PolicyTypeIngress ], if there are no Egress rules
+ (including the case where there are also no Ingress rules) \n
+ - [ PolicyTypeEgress ], if there are Egress rules but no Ingress
+ rules \n - [ PolicyTypeIngress, PolicyTypeEgress ], if there are
+ both Ingress and Egress rules. \n When the policy is read back again,
+ Types will always be one of these values, never empty or nil."
+ items:
+ description: PolicyType enumerates the possible values of the PolicySpec
+ Types field.
+ type: string
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
+
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: globalnetworksets.crd.projectcalico.org
+spec:
+ group: crd.projectcalico.org
+ names:
+ kind: GlobalNetworkSet
+ listKind: GlobalNetworkSetList
+ plural: globalnetworksets
+ singular: globalnetworkset
+ scope: Cluster
+ versions:
+ - name: v1
+ schema:
+ openAPIV3Schema:
+ description: GlobalNetworkSet contains a set of arbitrary IP sub-networks/CIDRs
+ that share labels to allow rules to refer to them via selectors. The labels
+ of GlobalNetworkSet are not namespaced.
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: GlobalNetworkSetSpec contains the specification for a NetworkSet + resource. + properties: + nets: + description: The list of IP networks that belong to this set. + items: + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: hostendpoints.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: HostEndpoint + listKind: HostEndpointList + plural: hostendpoints + singular: hostendpoint + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: HostEndpointSpec contains the specification for a HostEndpoint + resource. + properties: + expectedIPs: + description: "The expected IP addresses (IPv4 and IPv6) of the endpoint. + If \"InterfaceName\" is not present, Calico will look for an interface + matching any of the IPs in the list and apply policy to that. Note: + \tWhen using the selector match criteria in an ingress or egress + security Policy \tor Profile, Calico converts the selector into + a set of IP addresses. For host \tendpoints, the ExpectedIPs field + is used for that purpose. (If only the interface \tname is specified, + Calico does not learn the IPs of the interface for use in match + \tcriteria.)" + items: + type: string + type: array + interfaceName: + description: "Either \"*\", or the name of a specific Linux interface + to apply policy to; or empty. \"*\" indicates that this HostEndpoint + governs all traffic to, from or through the default network namespace + of the host named by the \"Node\" field; entering and leaving that + namespace via any interface, including those from/to non-host-networked + local workloads. \n If InterfaceName is not \"*\", this HostEndpoint + only governs traffic that enters or leaves the host through the + specific interface named by InterfaceName, or - when InterfaceName + is empty - through the specific interface that has one of the IPs + in ExpectedIPs. Therefore, when InterfaceName is empty, at least + one expected IP must be specified. Only external interfaces (such + as \"eth0\") are supported here; it isn't possible for a HostEndpoint + to protect traffic through a specific local workload interface. + \n Note: Only some kinds of policy are implemented for \"*\" HostEndpoints; + initially just pre-DNAT policy. Please check Calico documentation + for the latest position." + type: string + node: + description: The node name identifying the Calico node instance. 
+ type: string
+ ports:
+ description: Ports contains the endpoint's named ports, which may
+ be referenced in security policy rules.
+ items:
+ properties:
+ name:
+ type: string
+ port:
+ type: integer
+ protocol:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^.*
+ x-kubernetes-int-or-string: true
+ required:
+ - name
+ - port
+ - protocol
+ type: object
+ type: array
+ profiles:
+ description: A list of identifiers of security Profile objects that
+ apply to this endpoint. Each profile is applied in the order that
+ they appear in this list. Profile rules are applied after the selector-based
+ security policy.
+ items:
+ type: string
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
+
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: ipamblocks.crd.projectcalico.org
+spec:
+ group: crd.projectcalico.org
+ names:
+ kind: IPAMBlock
+ listKind: IPAMBlockList
+ plural: ipamblocks
+ singular: ipamblock
+ scope: Cluster
+ versions:
+ - name: v1
+ schema:
+ openAPIV3Schema:
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: IPAMBlockSpec contains the specification for an IPAMBlock
+ resource.
+ properties:
+ affinity:
+ description: Affinity of the block, if this block has one. If set,
+ it will be of the form "host:<hostname>". If not set, this block
+ is not affine to a host.
+ type: string
+ allocations:
+ description: Array of allocations in-use within this block. nil entries
+ mean the allocation is free. For non-nil entries at index i, the
+ index is the ordinal of the allocation within this block and the
+ value is the index of the associated attributes in the Attributes
+ array.
+ items:
+ type: integer
+ # TODO: This nullable is manually added in. We should update controller-gen
+ # to handle []*int properly itself.
+ nullable: true
+ type: array
+ attributes:
+ description: Attributes is an array of arbitrary metadata associated
+ with allocations in the block. To find attributes for a given allocation,
+ use the value of the allocation's entry in the Allocations array
+ as the index of the element in this array.
+ items:
+ properties:
+ handle_id:
+ type: string
+ secondary:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ type: array
+ cidr:
+ description: The block's CIDR.
+ type: string
+ deleted:
+ description: Deleted is an internal boolean used to work around a limitation
+ in the Kubernetes API whereby deletion will not return a conflict
+ error if the block has been updated. It should not be set manually.
+ type: boolean
+ sequenceNumber:
+ default: 0
+ description: We store a sequence number that is updated each time
+ the block is written.
Each allocation will also store the sequence + number of the block at the time of its creation. When releasing + an IP, passing the sequence number associated with the allocation + allows us to protect against a race condition and ensure the IP + hasn't been released and re-allocated since the release request. + format: int64 + type: integer + sequenceNumberForAllocation: + additionalProperties: + format: int64 + type: integer + description: Map of allocated ordinal within the block to sequence + number of the block at the time of allocation. Kubernetes does not + allow numerical keys for maps, so the key is cast to a string. + type: object + strictAffinity: + description: StrictAffinity on the IPAMBlock is deprecated and no + longer used by the code. Use IPAMConfig StrictAffinity instead. + type: boolean + unallocated: + description: Unallocated is an ordered list of allocations which are + free in the block. + items: + type: integer + type: array + required: + - allocations + - attributes + - cidr + - strictAffinity + - unallocated + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: ipamconfigs.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPAMConfig + listKind: IPAMConfigList + plural: ipamconfigs + singular: ipamconfig + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPAMConfigSpec contains the specification for an IPAMConfig + resource. + properties: + autoAllocateBlocks: + type: boolean + maxBlocksPerHost: + description: MaxBlocksPerHost, if non-zero, is the max number of blocks + that can be affine to each host. + type: integer + strictAffinity: + type: boolean + required: + - autoAllocateBlocks + - strictAffinity + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: ipamhandles.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPAMHandle + listKind: IPAMHandleList + plural: ipamhandles + singular: ipamhandle + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPAMHandleSpec contains the specification for an IPAMHandle + resource. + properties: + block: + additionalProperties: + type: integer + type: object + deleted: + type: boolean + handleID: + type: string + required: + - block + - handleID + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: ippools.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPPool + listKind: IPPoolList + plural: ippools + singular: ippool + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPPoolSpec contains the specification for an IPPool resource. + properties: + allowedUses: + description: AllowedUse controls what the IP pool will be used for. If + not specified or empty, defaults to ["Tunnel", "Workload"] for back-compatibility + items: + type: string + type: array + blockSize: + description: The block size to use for IP address assignments from + this pool. Defaults to 26 for IPv4 and 122 for IPv6. + type: integer + cidr: + description: The pool CIDR. + type: string + disableBGPExport: + description: 'Disable exporting routes from this IP Pool''s CIDR over + BGP. [Default: false]' + type: boolean + disabled: + description: When disabled is true, Calico IPAM will not assign addresses + from this pool. + type: boolean + ipip: + description: 'Deprecated: this field is only used for APIv1 backwards + compatibility. Setting this field is not allowed, this field is + for internal use only.' + properties: + enabled: + description: When enabled is true, ipip tunneling will be used + to deliver packets to destinations within this pool. + type: boolean + mode: + description: The IPIP mode. This can be one of "always" or "cross-subnet". A + mode of "always" will also use IPIP tunneling for routing to + destination IP addresses within this pool. A mode of "cross-subnet" + will only use IPIP tunneling when the destination node is on + a different subnet to the originating node. The default value + (if not specified) is "always". + type: string + type: object + ipipMode: + description: Contains configuration for IPIP tunneling for this pool. 
+ If not specified, then this is defaulted to "Never" (i.e. IPIP tunneling + is disabled). + type: string + nat-outgoing: + description: 'Deprecated: this field is only used for APIv1 backwards + compatibility. Setting this field is not allowed, this field is + for internal use only.' + type: boolean + natOutgoing: + description: When nat-outgoing is true, packets sent from Calico networked + containers in this pool to destinations outside of this pool will + be masqueraded. + type: boolean + nodeSelector: + description: Allows IPPool to allocate for a specific node by label + selector. + type: string + vxlanMode: + description: Contains configuration for VXLAN tunneling for this pool. + If not specified, then this is defaulted to "Never" (i.e. VXLAN + tunneling is disabled). + type: string + required: + - cidr + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: ipreservations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPReservation + listKind: IPReservationList + plural: ipreservations + singular: ipreservation + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPReservationSpec contains the specification for an IPReservation + resource. + properties: + reservedCIDRs: + description: ReservedCIDRs is a list of CIDRs and/or IP addresses + that Calico IPAM will exclude from new allocations. + items: + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: kubecontrollersconfigurations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: KubeControllersConfiguration + listKind: KubeControllersConfigurationList + plural: kubecontrollersconfigurations + singular: kubecontrollersconfiguration + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: KubeControllersConfigurationSpec contains the values of the + Kubernetes controllers configuration. + properties: + controllers: + description: Controllers enables and configures individual Kubernetes + controllers + properties: + namespace: + description: Namespace enables and configures the namespace controller. + Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform reconciliation + with the Calico datastore. [Default: 5m]' + type: string + type: object + node: + description: Node enables and configures the node controller. + Enabled by default, set to nil to disable. + properties: + hostEndpoint: + description: HostEndpoint controls syncing nodes to host endpoints. + Disabled by default, set to nil to disable. + properties: + autoCreate: + description: 'AutoCreate enables automatic creation of + host endpoints for every node. [Default: Disabled]' + type: string + type: object + leakGracePeriod: + description: 'LeakGracePeriod is the period used by the controller + to determine if an IP address has been leaked. Set to 0 + to disable IP garbage collection. [Default: 15m]' + type: string + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform reconciliation + with the Calico datastore. [Default: 5m]' + type: string + syncLabels: + description: 'SyncLabels controls whether to copy Kubernetes + node labels to Calico nodes. [Default: Enabled]' + type: string + type: object + policy: + description: Policy enables and configures the policy controller. + Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform reconciliation + with the Calico datastore. [Default: 5m]' + type: string + type: object + serviceAccount: + description: ServiceAccount enables and configures the service + account controller. Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform reconciliation + with the Calico datastore. [Default: 5m]' + type: string + type: object + workloadEndpoint: + description: WorkloadEndpoint enables and configures the workload + endpoint controller. Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform reconciliation + with the Calico datastore. [Default: 5m]' + type: string + type: object + type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer + etcdV3CompactionPeriod: + description: 'EtcdV3CompactionPeriod is the period between etcdv3 + compaction requests. Set to 0 to disable. [Default: 10m]' + type: string + healthChecks: + description: 'HealthChecks enables or disables support for health + checks [Default: Enabled]' + type: string + logSeverityScreen: + description: 'LogSeverityScreen is the log severity above which logs + are sent to the stdout. [Default: Info]' + type: string + prometheusMetricsPort: + description: 'PrometheusMetricsPort is the TCP port that the Prometheus + metrics server should bind to. Set to 0 to disable. 
[Default: 9094]' + type: integer + required: + - controllers + type: object + status: + description: KubeControllersConfigurationStatus represents the status + of the configuration. It's useful for admins to be able to see the actual + config that was applied, which can be modified by environment variables + on the kube-controllers process. + properties: + environmentVars: + additionalProperties: + type: string + description: EnvironmentVars contains the environment variables on + the kube-controllers that influenced the RunningConfig. + type: object + runningConfig: + description: RunningConfig contains the effective config that is running + in the kube-controllers pod, after merging the API resource with + any environment variables. + properties: + controllers: + description: Controllers enables and configures individual Kubernetes + controllers + properties: + namespace: + description: Namespace enables and configures the namespace + controller. Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform + reconciliation with the Calico datastore. [Default: + 5m]' + type: string + type: object + node: + description: Node enables and configures the node controller. + Enabled by default, set to nil to disable. + properties: + hostEndpoint: + description: HostEndpoint controls syncing nodes to host + endpoints. Disabled by default, set to nil to disable. + properties: + autoCreate: + description: 'AutoCreate enables automatic creation + of host endpoints for every node. [Default: Disabled]' + type: string + type: object + leakGracePeriod: + description: 'LeakGracePeriod is the period used by the + controller to determine if an IP address has been leaked. + Set to 0 to disable IP garbage collection. [Default: + 15m]' + type: string + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform + reconciliation with the Calico datastore. [Default: + 5m]' + type: string + syncLabels: + description: 'SyncLabels controls whether to copy Kubernetes + node labels to Calico nodes. [Default: Enabled]' + type: string + type: object + policy: + description: Policy enables and configures the policy controller. + Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform + reconciliation with the Calico datastore. [Default: + 5m]' + type: string + type: object + serviceAccount: + description: ServiceAccount enables and configures the service + account controller. Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform + reconciliation with the Calico datastore. [Default: + 5m]' + type: string + type: object + workloadEndpoint: + description: WorkloadEndpoint enables and configures the workload + endpoint controller. Enabled by default, set to nil to disable. + properties: + reconcilerPeriod: + description: 'ReconcilerPeriod is the period to perform + reconciliation with the Calico datastore. [Default: + 5m]' + type: string + type: object + type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer + etcdV3CompactionPeriod: + description: 'EtcdV3CompactionPeriod is the period between etcdv3 + compaction requests. Set to 0 to disable. 
[Default: 10m]' + type: string + healthChecks: + description: 'HealthChecks enables or disables support for health + checks [Default: Enabled]' + type: string + logSeverityScreen: + description: 'LogSeverityScreen is the log severity above which + logs are sent to the stdout. [Default: Info]' + type: string + prometheusMetricsPort: + description: 'PrometheusMetricsPort is the TCP port that the Prometheus + metrics server should bind to. Set to 0 to disable. [Default: + 9094]' + type: integer + required: + - controllers + type: object + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: networkpolicies.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: NetworkPolicy + listKind: NetworkPolicyList + plural: networkpolicies + singular: networkpolicy + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + properties: + egress: + description: The ordered set of egress rules. Each rule contains + a set of packet match criteria and a corresponding action to apply. + items: + description: "A Rule encapsulates a set of match criteria and an + action. Both selector-based security Policy and security Profiles + reference rules - separated out as a list of rules for both ingress + and egress packet matching. \n Each positive match criteria has + a negated version, prefixed with \"Not\". All the match criteria + within a rule must be satisfied for a packet to match. A single + rule can contain the positive and negative version of a match + and both must be satisfied for the rule to match." + properties: + action: + type: string + destination: + description: Destination contains the match criteria that apply + to destination entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." 
+ type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." + type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. 
\n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + http: + description: HTTP contains match criteria that apply to HTTP + requests. + properties: + methods: + description: Methods is an optional field that restricts + the rule to apply only to HTTP requests that use one of + the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple + methods are OR'd together. + items: + type: string + type: array + paths: + description: 'Paths is an optional field that restricts + the rule to apply to HTTP requests that use one of the + listed HTTP Paths. Multiple paths are OR''d together. + e.g: - exact: /foo - prefix: /bar NOTE: Each entry may + ONLY specify either a `exact` or a `prefix` match. The + validator will check for it.' + items: + description: 'HTTPPath specifies an HTTP path to match. + It may be either of the form: exact: : which matches + the path exactly or prefix: : which matches + the path prefix' + properties: + exact: + type: string + prefix: + type: string + type: object + type: array + type: object + icmp: + description: ICMP is an optional field that restricts the rule + to apply to a specific type and code of ICMP traffic. This + should only be specified if the Protocol field is set to "ICMP" + or "ICMPv6". + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + ipVersion: + description: IPVersion is an optional field that restricts the + rule to only match a specific IP version. + type: integer + metadata: + description: Metadata contains additional information for this + rule + properties: + annotations: + additionalProperties: + type: string + description: Annotations is a set of key value pairs that + give extra information about the rule + type: object + type: object + notICMP: + description: NotICMP is the negated version of the ICMP field. + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + notProtocol: + anyOf: + - type: integer + - type: string + description: NotProtocol is the negated version of the Protocol + field. + pattern: ^.* + x-kubernetes-int-or-string: true + protocol: + anyOf: + - type: integer + - type: string + description: "Protocol is an optional field that restricts the + rule to only apply to traffic of a specific IP protocol. Required + if any of the EntityRules contain Ports (because ports only + apply to certain protocols). 
\n Must be one of these string + values: \"TCP\", \"UDP\", \"ICMP\", \"ICMPv6\", \"SCTP\", + \"UDPLite\" or an integer in the range 1-255." + pattern: ^.* + x-kubernetes-int-or-string: true + source: + description: Source contains the match criteria that apply to + source entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." 
+ type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. \n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + required: + - action + type: object + type: array + ingress: + description: The ordered set of ingress rules. Each rule contains + a set of packet match criteria and a corresponding action to apply. + items: + description: "A Rule encapsulates a set of match criteria and an + action. Both selector-based security Policy and security Profiles + reference rules - separated out as a list of rules for both ingress + and egress packet matching. \n Each positive match criteria has + a negated version, prefixed with \"Not\". All the match criteria + within a rule must be satisfied for a packet to match. A single + rule can contain the positive and negative version of a match + and both must be satisfied for the rule to match." + properties: + action: + type: string + destination: + description: Destination contains the match criteria that apply + to destination entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." 
+ type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." + type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. 
\n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + http: + description: HTTP contains match criteria that apply to HTTP + requests. + properties: + methods: + description: Methods is an optional field that restricts + the rule to apply only to HTTP requests that use one of + the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple + methods are OR'd together. + items: + type: string + type: array + paths: + description: 'Paths is an optional field that restricts + the rule to apply to HTTP requests that use one of the + listed HTTP Paths. Multiple paths are OR''d together. + e.g: - exact: /foo - prefix: /bar NOTE: Each entry may + ONLY specify either a `exact` or a `prefix` match. The + validator will check for it.' + items: + description: 'HTTPPath specifies an HTTP path to match. + It may be either of the form: exact: : which matches + the path exactly or prefix: : which matches + the path prefix' + properties: + exact: + type: string + prefix: + type: string + type: object + type: array + type: object + icmp: + description: ICMP is an optional field that restricts the rule + to apply to a specific type and code of ICMP traffic. This + should only be specified if the Protocol field is set to "ICMP" + or "ICMPv6". + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + ipVersion: + description: IPVersion is an optional field that restricts the + rule to only match a specific IP version. + type: integer + metadata: + description: Metadata contains additional information for this + rule + properties: + annotations: + additionalProperties: + type: string + description: Annotations is a set of key value pairs that + give extra information about the rule + type: object + type: object + notICMP: + description: NotICMP is the negated version of the ICMP field. + properties: + code: + description: Match on a specific ICMP code. If specified, + the Type value must also be specified. This is a technical + limitation imposed by the kernel's iptables firewall, + which Calico uses to enforce the rule. + type: integer + type: + description: Match on a specific ICMP type. For example + a value of 8 refers to ICMP Echo Request (i.e. pings). + type: integer + type: object + notProtocol: + anyOf: + - type: integer + - type: string + description: NotProtocol is the negated version of the Protocol + field. + pattern: ^.* + x-kubernetes-int-or-string: true + protocol: + anyOf: + - type: integer + - type: string + description: "Protocol is an optional field that restricts the + rule to only apply to traffic of a specific IP protocol. Required + if any of the EntityRules contain Ports (because ports only + apply to certain protocols). 
\n Must be one of these string + values: \"TCP\", \"UDP\", \"ICMP\", \"ICMPv6\", \"SCTP\", + \"UDPLite\" or an integer in the range 1-255." + pattern: ^.* + x-kubernetes-int-or-string: true + source: + description: Source contains the match criteria that apply to + source entity. + properties: + namespaceSelector: + description: "NamespaceSelector is an optional field that + contains a selector expression. Only traffic that originates + from (or terminates at) endpoints within the selected + namespaces will be matched. When both NamespaceSelector + and another selector are defined on the same rule, then + only workload endpoints that are matched by both selectors + will be selected by the rule. \n For NetworkPolicy, an + empty NamespaceSelector implies that the Selector is limited + to selecting only workload endpoints in the same namespace + as the NetworkPolicy. \n For NetworkPolicy, `global()` + NamespaceSelector implies that the Selector is limited + to selecting only GlobalNetworkSet or HostEndpoint. \n + For GlobalNetworkPolicy, an empty NamespaceSelector implies + the Selector applies to workload endpoints across all + namespaces." + type: string + nets: + description: Nets is an optional field that restricts the + rule to only apply to traffic that originates from (or + terminates at) IP addresses in any of the given subnets. + items: + type: string + type: array + notNets: + description: NotNets is the negated version of the Nets + field. + items: + type: string + type: array + notPorts: + description: NotPorts is the negated version of the Ports + field. Since only some protocols have ports, if any ports + are specified it requires the Protocol match in the Rule + to be set to "TCP" or "UDP". + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + notSelector: + description: NotSelector is the negated version of the Selector + field. See Selector field for subtleties with negated + selectors. + type: string + ports: + description: "Ports is an optional field that restricts + the rule to only apply to traffic that has a source (destination) + port that matches one of these ranges/values. This value + is a list of integers or strings that represent ranges + of ports. \n Since only some protocols have ports, if + any ports are specified it requires the Protocol match + in the Rule to be set to \"TCP\" or \"UDP\"." + items: + anyOf: + - type: integer + - type: string + pattern: ^.* + x-kubernetes-int-or-string: true + type: array + selector: + description: "Selector is an optional field that contains + a selector expression (see Policy for sample syntax). + \ Only traffic that originates from (terminates at) endpoints + matching the selector will be matched. \n Note that: in + addition to the negated version of the Selector (see NotSelector + below), the selector expression syntax itself supports + negation. The two types of negation are subtly different. + One negates the set of matched endpoints, the other negates + the whole match: \n \tSelector = \"!has(my_label)\" matches + packets that are from other Calico-controlled \tendpoints + that do not have the label \"my_label\". \n \tNotSelector + = \"has(my_label)\" matches packets that are not from + Calico-controlled \tendpoints that do have the label \"my_label\". + \n The effect is that the latter will accept packets from + non-Calico sources whereas the former is limited to packets + from Calico-controlled endpoints." 
+ type: string + serviceAccounts: + description: ServiceAccounts is an optional field that restricts + the rule to only apply to traffic that originates from + (or terminates at) a pod running as a matching service + account. + properties: + names: + description: Names is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account whose name is in the list. + items: + type: string + type: array + selector: + description: Selector is an optional field that restricts + the rule to only apply to traffic that originates + from (or terminates at) a pod running as a service + account that matches the given label selector. If + both Names and Selector are specified then they are + AND'ed. + type: string + type: object + services: + description: "Services is an optional field that contains + options for matching Kubernetes Services. If specified, + only traffic that originates from or terminates at endpoints + within the selected service(s) will be matched, and only + to/from each endpoint's port. \n Services cannot be specified + on the same rule as Selector, NotSelector, NamespaceSelector, + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." + properties: + name: + description: Name specifies the name of a Kubernetes + Service to match. + type: string + namespace: + description: Namespace specifies the namespace of the + given Service. If left empty, the rule will match + within this policy's namespace. + type: string + type: object + type: object + required: + - action + type: object + type: array + order: + description: Order is an optional field that specifies the order in + which the policy is applied. Policies with higher "order" are applied + after those with lower order. If the order is omitted, it may be + considered to be "infinite" - i.e. the policy will be applied last. Policies + with identical order will be applied in alphanumerical order based + on the Policy "Name". + type: number + selector: + description: "The selector is an expression used to pick pick out + the endpoints that the policy should be applied to. \n Selector + expressions follow this syntax: \n \tlabel == \"string_literal\" + \ -> comparison, e.g. my_label == \"foo bar\" \tlabel != \"string_literal\" + \ -> not equal; also matches if label is not present \tlabel in + { \"a\", \"b\", \"c\", ... } -> true if the value of label X is + one of \"a\", \"b\", \"c\" \tlabel not in { \"a\", \"b\", \"c\", + ... } -> true if the value of label X is not one of \"a\", \"b\", + \"c\" \thas(label_name) -> True if that label is present \t! expr + -> negation of expr \texpr && expr -> Short-circuit and \texpr + || expr -> Short-circuit or \t( expr ) -> parens for grouping \tall() + or the empty selector -> matches all endpoints. \n Label names are + allowed to contain alphanumerics, -, _ and /. String literals are + more permissive but they do not support escape characters. \n Examples + (with made-up labels): \n \ttype == \"webserver\" && deployment + == \"prod\" \ttype in {\"frontend\", \"backend\"} \tdeployment != + \"dev\" \t! has(label_name)" + type: string + serviceAccountSelector: + description: ServiceAccountSelector is an optional field for an expression + used to select a pod based on service accounts. + type: string + types: + description: "Types indicates whether this policy applies to ingress, + or to egress, or to both. 
When not explicitly specified (and so + the value on creation is empty or nil), Calico defaults Types according + to what Ingress and Egress are present in the policy. The default + is: \n - [ PolicyTypeIngress ], if there are no Egress rules (including + the case where there are also no Ingress rules) \n - [ PolicyTypeEgress + ], if there are Egress rules but no Ingress rules \n - [ PolicyTypeIngress, + PolicyTypeEgress ], if there are both Ingress and Egress rules. + \n When the policy is read back again, Types will always be one + of these values, never empty or nil." + items: + description: PolicyType enumerates the possible values of the PolicySpec + Types field. + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: networksets.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: NetworkSet + listKind: NetworkSetList + plural: networksets + singular: networkset + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + description: NetworkSet is the Namespaced-equivalent of the GlobalNetworkSet. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: NetworkSetSpec contains the specification for a NetworkSet + resource. + properties: + nets: + description: The list of IP networks that belong to this set. + items: + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +--- +# Source: calico/templates/calico-kube-controllers-rbac.yaml + +# Include a clusterrole for the kube-controllers component, +# and bind it to the calico-kube-controllers serviceaccount. +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: calico-kube-controllers +rules: + # Nodes are watched to monitor for deletions. + - apiGroups: [""] + resources: + - nodes + verbs: + - watch + - list + - get + # Pods are watched to check for existence as part of IPAM controller. + - apiGroups: [""] + resources: + - pods + verbs: + - get + - list + - watch + # IPAM resources are manipulated in response to node and block updates, as well as periodic triggers. + - apiGroups: ["crd.projectcalico.org"] + resources: + - ipreservations + verbs: + - list + - apiGroups: ["crd.projectcalico.org"] + resources: + - blockaffinities + - ipamblocks + - ipamhandles + verbs: + - get + - list + - create + - update + - delete + - watch + # Pools are watched to maintain a mapping of blocks to IP pools. + - apiGroups: ["crd.projectcalico.org"] + resources: + - ippools + verbs: + - list + - watch + # kube-controllers manages hostendpoints. 
+ - apiGroups: ["crd.projectcalico.org"] + resources: + - hostendpoints + verbs: + - get + - list + - create + - update + - delete + # Needs access to update clusterinformations. + - apiGroups: ["crd.projectcalico.org"] + resources: + - clusterinformations + verbs: + - get + - list + - create + - update + - watch + # KubeControllersConfiguration is where it gets its config + - apiGroups: ["crd.projectcalico.org"] + resources: + - kubecontrollersconfigurations + verbs: + # read its own config + - get + # create a default if none exists + - create + # update status + - update + # watch for changes + - watch +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: calico-kube-controllers +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: calico-kube-controllers +subjects: +- kind: ServiceAccount + name: calico-kube-controllers + namespace: kube-system +--- + +--- +# Source: calico/templates/calico-node-rbac.yaml +# Include a clusterrole for the calico-node DaemonSet, +# and bind it to the calico-node serviceaccount. +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: calico-node +rules: + # Used for creating service account tokens to be used by the CNI plugin + - apiGroups: [""] + resources: + - serviceaccounts/token + resourceNames: + - calico-node + verbs: + - create + # The CNI plugin needs to get pods, nodes, and namespaces. + - apiGroups: [""] + resources: + - pods + - nodes + - namespaces + verbs: + - get + # EndpointSlices are used for Service-based network policy rule + # enforcement. + - apiGroups: ["discovery.k8s.io"] + resources: + - endpointslices + verbs: + - watch + - list + - apiGroups: [""] + resources: + - endpoints + - services + verbs: + # Used to discover service IPs for advertisement. + - watch + - list + # Used to discover Typhas. + - get + # Pod CIDR auto-detection on kubeadm needs access to config maps. + - apiGroups: [""] + resources: + - configmaps + verbs: + - get + - apiGroups: [""] + resources: + - nodes/status + verbs: + # Needed for clearing NodeNetworkUnavailable flag. + - patch + # Calico stores some configuration information in node annotations. + - update + # Watch for changes to Kubernetes NetworkPolicies. + - apiGroups: ["networking.k8s.io"] + resources: + - networkpolicies + verbs: + - watch + - list + # Used by Calico for policy information. + - apiGroups: [""] + resources: + - pods + - namespaces + - serviceaccounts + verbs: + - list + - watch + # The CNI plugin patches pods/status. + - apiGroups: [""] + resources: + - pods/status + verbs: + - patch + # Calico monitors various CRDs for config. + - apiGroups: ["crd.projectcalico.org"] + resources: + - globalfelixconfigs + - felixconfigurations + - bgppeers + - globalbgpconfigs + - bgpconfigurations + - ippools + - ipreservations + - ipamblocks + - globalnetworkpolicies + - globalnetworksets + - networkpolicies + - networksets + - clusterinformations + - hostendpoints + - blockaffinities + - caliconodestatuses + verbs: + - get + - list + - watch + # Calico must create and update some CRDs on startup. + - apiGroups: ["crd.projectcalico.org"] + resources: + - ippools + - felixconfigurations + - clusterinformations + verbs: + - create + - update + # Calico must update some CRDs. + - apiGroups: [ "crd.projectcalico.org" ] + resources: + - caliconodestatuses + verbs: + - update + # Calico stores some configuration information on the node. 
+ - apiGroups: [""] + resources: + - nodes + verbs: + - get + - list + - watch + # These permissions are only required for upgrade from v2.6, and can + # be removed after upgrade or on fresh installations. + - apiGroups: ["crd.projectcalico.org"] + resources: + - bgpconfigurations + - bgppeers + verbs: + - create + - update + # These permissions are required for Calico CNI to perform IPAM allocations. + - apiGroups: ["crd.projectcalico.org"] + resources: + - blockaffinities + - ipamblocks + - ipamhandles + verbs: + - get + - list + - create + - update + - delete + - apiGroups: ["crd.projectcalico.org"] + resources: + - ipamconfigs + verbs: + - get + # Block affinities must also be watchable by confd for route aggregation. + - apiGroups: ["crd.projectcalico.org"] + resources: + - blockaffinities + verbs: + - watch + # The Calico IPAM migration needs to get daemonsets. These permissions can be + # removed if not upgrading from an installation using host-local IPAM. + - apiGroups: ["apps"] + resources: + - daemonsets + verbs: + - get + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: calico-node +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: calico-node +subjects: +- kind: ServiceAccount + name: calico-node + namespace: kube-system + +--- +# Source: calico/templates/calico-node.yaml +# This manifest installs the calico-node container, as well +# as the CNI plugins and network config on +# each master and worker node in a Kubernetes cluster. +kind: DaemonSet +apiVersion: apps/v1 +metadata: + name: calico-node + namespace: kube-system + labels: + k8s-app: calico-node +spec: + selector: + matchLabels: + k8s-app: calico-node + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + k8s-app: calico-node + spec: + nodeSelector: + kubernetes.io/os: linux + hostNetwork: true + tolerations: + # Make sure calico-node gets scheduled on all nodes. + - effect: NoSchedule + operator: Exists + # Mark the pod as a critical add-on for rescheduling. + - key: CriticalAddonsOnly + operator: Exists + - effect: NoExecute + operator: Exists + serviceAccountName: calico-node + # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force + # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. + terminationGracePeriodSeconds: 0 + priorityClassName: system-node-critical + initContainers: + # This container performs upgrade from host-local IPAM to calico-ipam. + # It can be deleted if this is a fresh installation, or if you have already + # upgraded to use calico-ipam. + - name: upgrade-ipam + image: docker.io/calico/cni:v3.23.3 + command: ["/opt/cni/bin/calico-ipam", "-upgrade"] + envFrom: + - configMapRef: + # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode. + name: kubernetes-services-endpoint + optional: true + env: + - name: KUBERNETES_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: CALICO_NETWORKING_BACKEND + valueFrom: + configMapKeyRef: + name: calico-config + key: calico_backend + volumeMounts: + - mountPath: /var/lib/cni/networks + name: host-local-net-dir + - mountPath: /host/opt/cni/bin + name: cni-bin-dir + securityContext: + privileged: true + # This container installs the CNI binaries + # and CNI network config file on each node. 
+ - name: install-cni + image: docker.io/calico/cni:v3.23.3 + command: ["/opt/cni/bin/install"] + envFrom: + - configMapRef: + # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode. + name: kubernetes-services-endpoint + optional: true + env: + # Name of the CNI config file to create. + - name: CNI_CONF_NAME + value: "10-calico.conflist" + # The CNI network config to install on each node. + - name: CNI_NETWORK_CONFIG + valueFrom: + configMapKeyRef: + name: calico-config + key: cni_network_config + # Set the hostname based on the k8s node name. + - name: KUBERNETES_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + # CNI MTU Config variable + - name: CNI_MTU + valueFrom: + configMapKeyRef: + name: calico-config + key: veth_mtu + # Prevents the container from sleeping forever. + - name: SLEEP + value: "false" + volumeMounts: + - mountPath: /host/opt/cni/bin + name: cni-bin-dir + - mountPath: /host/etc/cni/net.d + name: cni-net-dir + securityContext: + privileged: true + # This init container mounts the necessary filesystems needed by the BPF data plane + # i.e. bpf at /sys/fs/bpf and cgroup2 at /run/calico/cgroup. Calico-node initialisation is executed + # in best effort fashion, i.e. no failure for errors, to not disrupt pod creation in iptable mode. + - name: "mount-bpffs" + image: docker.io/calico/node:v3.23.3 + command: ["calico-node", "-init", "-best-effort"] + volumeMounts: + - mountPath: /sys/fs + name: sys-fs + # Bidirectional is required to ensure that the new mount we make at /sys/fs/bpf propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + - mountPath: /var/run/calico + name: var-run-calico + # Bidirectional is required to ensure that the new mount we make at /run/calico/cgroup propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + # Mount /proc/ from host which usually is an init program at /nodeproc. It's needed by mountns binary, + # executed by calico-node, to mount root cgroup2 fs at /run/calico/cgroup to attach CTLB programs correctly. + - mountPath: /nodeproc + name: nodeproc + readOnly: true + securityContext: + privileged: true + containers: + # Runs calico-node container on each Kubernetes node. This + # container programs network policy and routes on each + # host. + - name: calico-node + image: docker.io/calico/node:v3.23.3 + envFrom: + - configMapRef: + # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode. + name: kubernetes-services-endpoint + optional: true + env: + # Use Kubernetes API as the backing datastore. + - name: DATASTORE_TYPE + value: "kubernetes" + # Wait for the datastore. + - name: WAIT_FOR_DATASTORE + value: "true" + # Set based on the k8s node name. + - name: NODENAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + # Choose the backend to use. + - name: CALICO_NETWORKING_BACKEND + valueFrom: + configMapKeyRef: + name: calico-config + key: calico_backend + # Cluster type to identify the deployment type + - name: CLUSTER_TYPE + value: "k8s,bgp" + # Auto-detect the BGP IP address. + - name: IP + value: "autodetect" + # Enable IPIP + - name: CALICO_IPV4POOL_IPIP + value: "Always" + # Enable or Disable VXLAN on the default IP pool. + - name: CALICO_IPV4POOL_VXLAN + value: "Never" + # Enable or Disable VXLAN on the default IPv6 IP pool. 
+ - name: CALICO_IPV6POOL_VXLAN + value: "Never" + # Set MTU for tunnel device used if ipip is enabled + - name: FELIX_IPINIPMTU + valueFrom: + configMapKeyRef: + name: calico-config + key: veth_mtu + # Set MTU for the VXLAN tunnel device. + - name: FELIX_VXLANMTU + valueFrom: + configMapKeyRef: + name: calico-config + key: veth_mtu + # Set MTU for the Wireguard tunnel device. + - name: FELIX_WIREGUARDMTU + valueFrom: + configMapKeyRef: + name: calico-config + key: veth_mtu + # The default IPv4 pool to create on startup if none exists. Pod IPs will be + # chosen from this range. Changing this value after installation will have + # no effect. This should fall within `--cluster-cidr`. + # - name: CALICO_IPV4POOL_CIDR + # value: "192.168.0.0/16" + # Disable file logging so `kubectl logs` works. + - name: CALICO_DISABLE_FILE_LOGGING + value: "true" + # Set Felix endpoint to host default action to ACCEPT. + - name: FELIX_DEFAULTENDPOINTTOHOSTACTION + value: "ACCEPT" + # Disable IPv6 on Kubernetes. + - name: FELIX_IPV6SUPPORT + value: "false" + - name: FELIX_HEALTHENABLED + value: "true" + securityContext: + privileged: true + resources: + requests: + cpu: 250m + lifecycle: + preStop: + exec: + command: + - /bin/calico-node + - -shutdown + livenessProbe: + exec: + command: + - /bin/calico-node + - -felix-live + - -bird-live + periodSeconds: 10 + initialDelaySeconds: 10 + failureThreshold: 6 + timeoutSeconds: 10 + readinessProbe: + exec: + command: + - /bin/calico-node + - -felix-ready + - -bird-ready + periodSeconds: 10 + timeoutSeconds: 10 + volumeMounts: + # For maintaining CNI plugin API credentials. + - mountPath: /host/etc/cni/net.d + name: cni-net-dir + readOnly: false + - mountPath: /lib/modules + name: lib-modules + readOnly: true + - mountPath: /run/xtables.lock + name: xtables-lock + readOnly: false + - mountPath: /var/run/calico + name: var-run-calico + readOnly: false + - mountPath: /var/lib/calico + name: var-lib-calico + readOnly: false + - name: policysync + mountPath: /var/run/nodeagent + # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the + # parent directory. + - name: bpffs + mountPath: /sys/fs/bpf + - name: cni-log-dir + mountPath: /var/log/calico/cni + readOnly: true + volumes: + # Used by calico-node. + - name: lib-modules + hostPath: + path: /lib/modules + - name: var-run-calico + hostPath: + path: /var/run/calico + - name: var-lib-calico + hostPath: + path: /var/lib/calico + - name: xtables-lock + hostPath: + path: /run/xtables.lock + type: FileOrCreate + - name: sys-fs + hostPath: + path: /sys/fs/ + type: DirectoryOrCreate + - name: bpffs + hostPath: + path: /sys/fs/bpf + type: Directory + # mount /proc at /nodeproc to be used by mount-bpffs initContainer to mount root cgroup2 fs. + - name: nodeproc + hostPath: + path: /proc + # Used to install CNI. + - name: cni-bin-dir + hostPath: + path: /opt/cni/bin + - name: cni-net-dir + hostPath: + path: /etc/cni/net.d + # Used to access CNI logs. + - name: cni-log-dir + hostPath: + path: /var/log/calico/cni + # Mount in the directory for host-local IPAM allocations. This is + # used when upgrading from host-local to calico-ipam, and can be removed + # if not using the upgrade-ipam init container. 
+ - name: host-local-net-dir + hostPath: + path: /var/lib/cni/networks + # Used to create per-pod Unix Domain Sockets + - name: policysync + hostPath: + type: DirectoryOrCreate + path: /var/run/nodeagent +--- + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: calico-node + namespace: kube-system + +--- +# Source: calico/templates/calico-kube-controllers.yaml +# See https://github.com/projectcalico/kube-controllers +apiVersion: apps/v1 +kind: Deployment +metadata: + name: calico-kube-controllers + namespace: kube-system + labels: + k8s-app: calico-kube-controllers +spec: + # The controllers can only have a single active instance. + replicas: 1 + selector: + matchLabels: + k8s-app: calico-kube-controllers + strategy: + type: Recreate + template: + metadata: + name: calico-kube-controllers + namespace: kube-system + labels: + k8s-app: calico-kube-controllers + spec: + nodeSelector: + kubernetes.io/os: linux + tolerations: + # Mark the pod as a critical add-on for rescheduling. + - key: CriticalAddonsOnly + operator: Exists + - key: node-role.kubernetes.io/master + effect: NoSchedule + serviceAccountName: calico-kube-controllers + priorityClassName: system-cluster-critical + containers: + - name: calico-kube-controllers + image: docker.io/calico/kube-controllers:v3.23.3 + env: + # Choose which controllers to run. + - name: ENABLED_CONTROLLERS + value: node + - name: DATASTORE_TYPE + value: kubernetes + livenessProbe: + exec: + command: + - /usr/bin/check-status + - -l + periodSeconds: 10 + initialDelaySeconds: 10 + failureThreshold: 6 + timeoutSeconds: 10 + readinessProbe: + exec: + command: + - /usr/bin/check-status + - -r + periodSeconds: 10 + +--- + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: calico-kube-controllers + namespace: kube-system + +--- + +# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict + +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: calico-kube-controllers + namespace: kube-system + labels: + k8s-app: calico-kube-controllers +spec: + maxUnavailable: 1 + selector: + matchLabels: + k8s-app: calico-kube-controllers + +--- +# Source: calico/templates/calico-etcd-secrets.yaml + +--- +# Source: calico/templates/calico-typha.yaml + +--- +# Source: calico/templates/configure-canal.yaml + + diff --git a/telemetry-aware-scheduling/deploy/cluster-api/shared/clusterresourcesets.yaml b/telemetry-aware-scheduling/deploy/cluster-api/shared/clusterresourcesets.yaml new file mode 100644 index 00000000..7b39ac6b --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/shared/clusterresourcesets.yaml @@ -0,0 +1,95 @@ +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: prometheus-node-exporter +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: prometheus-node-exporter-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: custom-metrics-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: custom-metrics-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: 
tas + resources: + - kind: ConfigMap + name: custom-metrics-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: tas-tls-secret +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: tas-tls-secret-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: extender +spec: + clusterSelector: + matchLabels: + scheduler: tas + resources: + - kind: ConfigMap + name: extender-configmap +--- +apiVersion: addons.cluster.x-k8s.io/v1alpha3 +kind: ClusterResourceSet +metadata: + name: calico +spec: + clusterSelector: + matchLabels: + cni: calico + resources: + - kind: ConfigMap + name: calico-configmap From fd030d41e89a5afb21b76f09ad6a724b01c120f8 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 11:37:05 +0100 Subject: [PATCH 07/21] Update link to Health Metric Example. --- telemetry-aware-scheduling/deploy/cluster-api/README.md | 2 +- .../deploy/cluster-api/docker/capi-docker.md | 2 +- telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/README.md b/telemetry-aware-scheduling/deploy/cluster-api/README.md index c9f7d267..c704d172 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/README.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/README.md @@ -14,4 +14,4 @@ This folder contains an automated and declarative way of deploying the Telemetry ## Testing You can test if the scheduler actually works by following this guide: -[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file +[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/master/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 2c4bab7d..09cfdc02 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -155,4 +155,4 @@ sed -i -e "s/server:.*/server: https:\/\/$(docker port ecoqube-dev-lb 6443/tcp | ``` You can test if the scheduler actually works by following this guide: -[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file +[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/master/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index 0ebf1b14..b82d730c 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -127,4 +127,4 @@ Finally, you can apply your manifests `kubectl apply -f your-manifests.yaml`. 
The Telemetry Aware Scheduler will be running on your new cluster. You can test if the scheduler actually works by following this guide: -[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file +[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/master/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file From 3badb057cfd7867d32bb182b82214db4db815bc2 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 11:40:57 +0100 Subject: [PATCH 08/21] Rename your-manifests.yaml to capi-quickstart.yaml --- .../deploy/cluster-api/docker/capi-docker.md | 7 ++----- .../deploy/cluster-api/generic/capi.md | 9 +++------ 2 files changed, 5 insertions(+), 11 deletions(-) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 09cfdc02..79f25424 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -50,7 +50,7 @@ clusterctl generate cluster capi-quickstart --flavor development \ ``` 2. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with - `your-manifests.yaml`. + `capi-quickstart.yaml`. The new config will: - Configure TLS certificates for the extender @@ -59,9 +59,6 @@ The new config will: - Change the behavior of the pre-existing patch application of `/spec/template/spec/kubeadmConfigSpec/files` in `ClusterClass` such that our new patch is not ignored/overwritten. For some more clarification on this, see [this issue](https://github.com/kubernetes-sigs/cluster-api/pull/7630). -You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target -it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` in your `Cluster` resource present in `your-manifests.yaml`. - 3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. @@ -140,7 +137,7 @@ Apply them to the management cluster with `kubectl apply -f ./shared/clusterreso 6. Apply the cluster manifests -Finally, you can apply your manifests `kubectl apply -f your-manifests.yaml`. +Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. The Telemetry Aware Scheduler will be running on your new cluster. You can connect to the workload cluster by exporting its kubeconfig: diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index b82d730c..ba8293df 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -18,7 +18,7 @@ clusterctl generate cluster scheduling-dev-wkld \ --kubernetes-version v1.25.0 \ --control-plane-machine-count=1 \ --worker-machine-count=3 \ - > your-manifests.yaml + > capi-quickstart.yaml ``` Be aware that you will need to install a CNI such as Calico before the cluster will be usable. 
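As a sketch of how the CNI step can be automated with the same ClusterResourceSet mechanism (assuming the cluster name `scheduling-dev-wkld` generated above): the `calico` ClusterResourceSet in `../shared/clusterresourcesets.yaml` selects clusters carrying the `cni: calico` label, so labelling the `Cluster` resource on the management cluster is enough for the manifests in `calico-configmap` to be applied automatically:

```bash
# Label the workload cluster so the "calico" ClusterResourceSet matches it;
# ../shared/cluster-patch.yaml sets the same label declaratively.
kubectl label cluster scheduling-dev-wkld cni=calico
```

Either the imperative label above or merging `../shared/cluster-patch.yaml` into your generated manifests achieves the same result.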
@@ -26,7 +26,7 @@ Calico works for the great majority of providers, so all configurations have bee For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart. 2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with - `your-manifests.yaml`. + `capi-quickstart.yaml`. If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility: @@ -42,9 +42,6 @@ The new config will: - Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet` - Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler. -You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target -it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` in your `Cluster` resource present in `your-manifests.yaml`. - 3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. @@ -123,7 +120,7 @@ Apply them to the management cluster with `kubectl apply -f ./shared/clusterreso 6. Apply the cluster manifests -Finally, you can apply your manifests `kubectl apply -f your-manifests.yaml`. +Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. The Telemetry Aware Scheduler will be running on your new cluster. You can test if the scheduler actually works by following this guide: From 57ff01420bad41a7e9da52f4c5e798fef47e193e Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 11:43:40 +0100 Subject: [PATCH 09/21] Fix numbering in markdown. --- .../deploy/cluster-api/docker/capi-docker.md | 42 +++++++++---------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 79f25424..4255659c 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -49,7 +49,7 @@ clusterctl generate cluster capi-quickstart --flavor development \ > capi-quickstart.yaml ``` -2. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with +4. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with `capi-quickstart.yaml`. The new config will: @@ -59,37 +59,37 @@ The new config will: - Change the behavior of the pre-existing patch application of `/spec/template/spec/kubeadmConfigSpec/files` in `ClusterClass` such that our new patch is not ignored/overwritten. For some more clarification on this, see [this issue](https://github.com/kubernetes-sigs/cluster-api/pull/7630). -3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: +5. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. 
additional metric scraping configurations), then render the charts: - ```bash - helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml - helm template ../charts/prometheus_helm_chart/ > prometheus.yaml - helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml - ``` +```bash +helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml +helm template ../charts/prometheus_helm_chart/ > prometheus.yaml +helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml +``` You need to add namespaces resources, else resource application will fail. Prepend the following to `prometheus.yaml`: - ```bash +```bash kind: Namespace apiVersion: v1 metadata: name: monitoring labels: name: monitoring - ```` +```` Prepend the following to `prometheus-custom-metrics.yaml`: - ```bash +```bash kind: Namespace apiVersion: v1 metadata: name: custom-metrics labels: name: custom-metrics - ``` +``` The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. Information on how to generate correctly signed certs in kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). @@ -97,21 +97,21 @@ Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working Run the following: - ```bash - kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml - kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml - ``` +```bash +kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > custom-metrics-tls-secret.yaml +kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -oyaml --dry-run=client > tas-tls-secret.yaml +``` **Attention: Don't commit the TLS certificate and private key to any Git repo as it is considered bad security practice! Make sure to wipe them off your workstation after applying the relative Secrets to your cluster.** You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter" ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience: - ```bash - yq '.' ../tas-*.yaml > tas.yaml - ``` +```bash +yq '.' ../tas-*.yaml > tas.yaml +``` -4. Create and apply the ConfigMaps +6. Create and apply the ConfigMaps ```bash kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml @@ -130,12 +130,12 @@ Apply to the management cluster: kubectl apply -f '*-configmap.yaml' ``` -5. Apply the ClusterResourceSets +7. Apply the ClusterResourceSets ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`. Apply them to the management cluster with `kubectl apply -f ./shared/clusterresourcesets.yaml` -6. Apply the cluster manifests +8. Apply the cluster manifests Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. The Telemetry Aware Scheduler will be running on your new cluster. 
You can connect to the workload cluster by From 1bb8999bc709f37b82f2a161cbeb5a7c2fd9f3a5 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 11:50:22 +0100 Subject: [PATCH 10/21] Add yaml newlines. Move cluster-patch.yaml to shared folder and rename docs. --- .../deploy/cluster-api/docker/capi-docker.md | 2 +- .../deploy/cluster-api/docker/cluster-patch.yaml | 3 ++- .../deploy/cluster-api/docker/clusterclass-patch.yaml | 2 +- .../docker/kubeadmcontrolplanetemplate-patch.yaml | 2 +- .../deploy/cluster-api/generic/capi.md | 2 +- .../deploy/cluster-api/generic/cluster-patch.yaml | 3 ++- .../cluster-api/generic/kubeadmcontrolplane-patch.yaml | 2 +- .../deploy/cluster-api/shared/cluster-patch.yaml | 6 ++++++ 8 files changed, 15 insertions(+), 7 deletions(-) create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/shared/cluster-patch.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 4255659c..117db49c 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -49,7 +49,7 @@ clusterctl generate cluster capi-quickstart --flavor development \ > capi-quickstart.yaml ``` -4. Merge the contents of the resources provided in `cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with +4. Merge the contents of the resources provided in `../shared/cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with `capi-quickstart.yaml`. The new config will: diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml index eb002606..94b1488a 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/cluster-patch.yaml @@ -2,4 +2,5 @@ apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: labels: - scheduler: tas \ No newline at end of file + scheduler: tas + cni: calico diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml index da599c36..b82e8e4b 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml @@ -6,4 +6,4 @@ spec: - jsonPatches: - op: add # Note: we must add a dash - after files, as shown below. Else the patch application in KubeadmControlPlaneTemplate will fail! 
- path: /spec/template/spec/kubeadmConfigSpec/files/- \ No newline at end of file + path: /spec/template/spec/kubeadmConfigSpec/files/- diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml index c8eec953..06bc5b65 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/kubeadmcontrolplanetemplate-patch.yaml @@ -53,4 +53,4 @@ spec: "path": "/spec/dnsPolicy", "value": "ClusterFirstWithHostNet" } - ] \ No newline at end of file + ] diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index ba8293df..a3b68ff4 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -25,7 +25,7 @@ Be aware that you will need to install a CNI such as Calico before the cluster w Calico works for the great majority of providers, so all configurations have been provided for your convenience, i.e ClusterResourceSet, CRS label in Cluster and CRS ConfigMap). For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart. -2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with +2. Merge the contents of the resources provided in `../shared/cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with `capi-quickstart.yaml`. If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility: diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml index eb002606..94b1488a 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml @@ -2,4 +2,5 @@ apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: labels: - scheduler: tas \ No newline at end of file + scheduler: tas + cni: calico diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml index 3ce5e944..20f48581 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/kubeadmcontrolplane-patch.yaml @@ -51,4 +51,4 @@ spec: directory: /tmp/kubeadm/patches joinConfiguration: patches: - directory: /tmp/kubeadm/patches \ No newline at end of file + directory: /tmp/kubeadm/patches diff --git a/telemetry-aware-scheduling/deploy/cluster-api/shared/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/shared/cluster-patch.yaml new file mode 100644 index 00000000..94b1488a --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/shared/cluster-patch.yaml @@ -0,0 +1,6 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + labels: + scheduler: tas + cni: calico From e41d190ad8e1c724f1f307e735d0691842a9e87e Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 12:08:27 +0100 Subject: [PATCH 11/21] Add testing/development notice in all markdowns. 
---
 telemetry-aware-scheduling/deploy/cluster-api/README.md       | 2 ++
 .../deploy/cluster-api/docker/capi-docker.md                  | 2 ++
 telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/README.md b/telemetry-aware-scheduling/deploy/cluster-api/README.md
index c704d172..7527055a 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/README.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/README.md
@@ -1,5 +1,7 @@
 # Cluster API deployment
 
+** This guide is meant for local testing/development only, this is not meant for production usage.**
+
 ## Introduction
 
 Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. [Learn more](https://cluster-api.sigs.k8s.io/introduction.html).
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 117db49c..14c78fb6 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -1,5 +1,7 @@
 # Cluster API deployment - Docker provider (for local testing/development only)
 
+** This guide is meant for local testing/development only, this is not meant for production usage.**
+
 ## Requirements
 
 - A management cluster provisioned in your infrastructure of choice and the relative tooling.
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index a3b68ff4..a8c0294f 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -1,5 +1,7 @@
 # Cluster API deployment - Generic provider
 
+** This guide is meant for local testing/development only, this is not meant for production usage.**
+
 ## Requirements
 
 - A management cluster provisioned in your infrastructure of choice and the relative tooling.

From fb752e1b8e5e0f5b89f1eefbf5f2e59fb5a1dd2a Mon Sep 17 00:00:00 2001
From: Cristiano Colangelo
Date: Fri, 27 Jan 2023 12:10:18 +0100
Subject: [PATCH 12/21] Move generic/docker provider links to top.

---
 .../deploy/cluster-api/docker/capi-docker.md | 6 +++---
 .../deploy/cluster-api/generic/capi.md       | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 14c78fb6..919cd30b 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -2,6 +2,8 @@
 
 ** This guide is meant for local testing/development only, this is not meant for production usage.**
 
+For the deployment using a generic provider, please refer to [Cluster API deployment - Generic provider](../generic/capi.md).
+
 ## Requirements
 
 - A management cluster provisioned in your infrastructure of choice and the relative tooling.
@@ -11,9 +13,7 @@
 
 ## Provision clusters with TAS installed using Cluster API
 
-We will provision a KinD cluster with the TAS installed using Cluster API. This guide is meant for local testing/development only.
-
-For the deployment using a generic provider, please refer to [Cluster API deployment - Generic provider](capi.md).
+We will provision a KinD cluster with the TAS installed using Cluster API.
 
 1. Run the following to set up a KinD cluster for CAPD:
 
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index a8c0294f..86a21a31 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -2,6 +2,8 @@
 
 ** This guide is meant for local testing/development only, this is not meant for production usage.**
 
+For the deployment using the Docker provider (local testing/development only), please refer to [Cluster API deployment - Docker provider](../docker/capi-docker.md).
+
 ## Requirements
 
 - A management cluster provisioned in your infrastructure of choice and the relative tooling.
@@ -12,7 +14,6 @@ We will provision a generic cluster with the TAS installed using Cluster API.
 This was tested using the GCP provider.
 
-For the deployment using the Docker provider (local testing/development only), please refer to [Cluster API deployment - Generic provider](capi.md).
 
 1. In your management cluster, with all your environment variables set to generate cluster definitions, run for example:

From eaf3e7c4ee0df3b45dde8fb228e4264200958fb6 Mon Sep 17 00:00:00 2001
From: Cristiano Colangelo
Date: Fri, 27 Jan 2023 12:12:07 +0100
Subject: [PATCH 13/21] Add Docker and Kind versions.

---
 .../deploy/cluster-api/docker/capi-docker.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 919cd30b..37f87d04 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -9,7 +9,8 @@ For the deployment using a generic provider, please refer to [Cluster API deploy
 - A management cluster provisioned in your infrastructure of choice and the relative tooling.
   See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html).
 - Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25).
-- Docker
+- Docker (tested on Docker version 20.10.22)
+- Kind (tested on Kind version 0.17.0)
 
 ## Provision clusters with TAS installed using Cluster API

From 2fe40df9bcdd9a21859744ea4ec3e01ab4c86d49 Mon Sep 17 00:00:00 2001
From: Cristiano Colangelo
Date: Fri, 27 Jan 2023 12:21:19 +0100
Subject: [PATCH 14/21] Add small comment after clusterctl generate.

---
 .../deploy/cluster-api/docker/capi-docker.md                  | 2 ++
 telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 37f87d04..d15102bf 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -52,6 +52,8 @@ clusterctl generate cluster capi-quickstart --flavor development \
   > capi-quickstart.yaml
 ```
 
+If Kind was running correctly, and the Docker provider was initialized with the previous command, the command will return nothing to indicate success.
+
 4. Merge the contents of the resources provided in `../shared/cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with
    `capi-quickstart.yaml`.
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index 86a21a31..bac67ca4 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -24,8 +24,10 @@ clusterctl generate cluster scheduling-dev-wkld \
   > capi-quickstart.yaml
 ```
 
+If your environment variables and provider were set up correctly, the command will return nothing to indicate success.
+
 Be aware that you will need to install a CNI such as Calico before the cluster will be usable.
-Calico works for the great majority of providers, so all configurations have been provided for your convenience, i.e ClusterResourceSet, CRS label in Cluster and CRS ConfigMap).
+Calico works for the great majority of providers, so all configurations have been provided for your convenience (i.e. ClusterResourceSet, CRS label in Cluster and CRS ConfigMap).
 For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart.

From 1365e5920e04e020b660eadc67ff82963187d04d Mon Sep 17 00:00:00 2001
From: Cristiano Colangelo
Date: Fri, 27 Jan 2023 12:42:09 +0100
Subject: [PATCH 15/21] Add necessary feature flags.

---
 .../deploy/cluster-api/docker/capi-docker.md                  |  3 ++-
 telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md | 18 ++++++++++++------
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index d15102bf..5500c8bb 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -30,10 +30,11 @@ nodes:
 EOF
 ```
 
-2. Enable the `CLUSTER_TOPOLOGY` feature gate:
+2. Enable the `CLUSTER_TOPOLOGY` and `EXP_CLUSTER_RESOURCE_SET` feature gates:
 
 ```bash
 export CLUSTER_TOPOLOGY=true
+export EXP_CLUSTER_RESOURCE_SET=true
 ```
 
 3. Initialize the management cluster:
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index bac67ca4..8a0760af 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -14,7 +14,13 @@ We will provision a generic cluster with the TAS installed using Cluster API.
 This was tested using the GCP provider.
 
-1. In your management cluster, with all your environment variables set to generate cluster definitions, run for example:
+1. Enable the `EXP_CLUSTER_RESOURCE_SET` feature gate:
+
+```bash
+export EXP_CLUSTER_RESOURCE_SET=true
+```
+
+2. In your management cluster, with all your environment variables set to generate cluster definitions, run for example:
 
 ```bash
 clusterctl generate cluster scheduling-dev-wkld \
@@ -30,7 +36,7 @@ Be aware that you will need to install a CNI such as Calico before the cluster w
 Calico works for the great majority of providers, so all configurations have been provided for your convenience (i.e. ClusterResourceSet, CRS label in Cluster and CRS ConfigMap).
For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart. -2. Merge the contents of the resources provided in `../shared/cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with +3. Merge the contents of the resources provided in `../shared/cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with `capi-quickstart.yaml`. If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility: @@ -47,7 +53,7 @@ The new config will: - Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet` - Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler. -3. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: +4. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. additional metric scraping configurations), then render the charts: @@ -99,7 +105,7 @@ ClusterRole. We will join the TAS manifests together, so we can have a single Co yq '.' ../tas-*.yaml > tas.yaml ``` -4. Create and apply the ConfigMaps +5. Create and apply the ConfigMaps ```bash kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml @@ -118,12 +124,12 @@ Apply to the management cluster: kubectl apply -f '*-configmap.yaml' ``` -5. Apply the ClusterResourceSets +6. Apply the ClusterResourceSets ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`. Apply them to the management cluster with `kubectl apply -f ./shared/clusterresourcesets.yaml` -6. Apply the cluster manifests +7. Apply the cluster manifests Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. The Telemetry Aware Scheduler will be running on your new cluster. From d3cd12c277e0275783918aef67336c35233d0f19 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 12:44:29 +0100 Subject: [PATCH 16/21] Update paths of commands referencing the Helm chart. Forgot to rename capi-quickstart somewhere. 
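
For reference, the chart paths in both guides now assume the working directory is the provider subfolder (docker/ or generic/), one level deeper than before, so each relative path gains one more `../`. Rendered from inside either subfolder, for example:

```bash
# The charts live under telemetry-aware-scheduling/deploy/charts,
# i.e. two directory levels up from docker/ or generic/.
helm template ../../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml
helm template ../../charts/prometheus_helm_chart/ > prometheus.yaml
helm template ../../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml
```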
--- .../deploy/cluster-api/docker/capi-docker.md | 12 ++++++------ .../deploy/cluster-api/generic/capi.md | 8 ++++---- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 5500c8bb..10485c67 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -71,9 +71,9 @@ First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you additional metric scraping configurations), then render the charts: ```bash -helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml -helm template ../charts/prometheus_helm_chart/ > prometheus.yaml -helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml +helm template ../../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml +helm template ../../charts/prometheus_helm_chart/ > prometheus.yaml +helm template ../../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml ``` You need to add namespaces resources, else resource application will fail. Prepend the following to `prometheus.yaml`: @@ -139,7 +139,7 @@ kubectl apply -f '*-configmap.yaml' 7. Apply the ClusterResourceSets ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`. -Apply them to the management cluster with `kubectl apply -f ./shared/clusterresourcesets.yaml` +Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml` 8. Apply the cluster manifests @@ -148,13 +148,13 @@ The Telemetry Aware Scheduler will be running on your new cluster. 
You can conne exporting its kubeconfig: ```bash -clusterctl get kubeconfig ecoqube-dev > ecoqube-dev.kubeconfig +clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig ``` Then, specifically for the CAPD docker, point the kubeconfig to the correct address of the HAProxy container: ```bash -sed -i -e "s/server:.*/server: https:\/\/$(docker port ecoqube-dev-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./ecoqube-dev.kubeconfig +sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig ``` You can test if the scheduler actually works by following this guide: diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index 8a0760af..9ff6c62a 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -59,9 +59,9 @@ First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you additional metric scraping configurations), then render the charts: ```bash -helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml -helm template ../charts/prometheus_helm_chart/ > prometheus.yaml -helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml +helm template ../../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml +helm template ../../charts/prometheus_helm_chart/ > prometheus.yaml +helm template ../../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml ``` You need to add namespaces resources, else resource application will fail. Prepend the following to `prometheus.yaml`: @@ -127,7 +127,7 @@ kubectl apply -f '*-configmap.yaml' 6. Apply the ClusterResourceSets ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`. -Apply them to the management cluster with `kubectl apply -f ./shared/clusterresourcesets.yaml` +Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml` 7. Apply the cluster manifests From 0891f21f7513d05b197276a1ceb1a899a784e4cd Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 16:21:58 +0100 Subject: [PATCH 17/21] Add yq commands to wrangle with the various resources with the command line. --- .../deploy/cluster-api/docker/capi-docker.md | 48 +++++++++++++++++-- .../docker/clusterclass-patch.yaml | 33 +++++++++---- .../deploy/cluster-api/generic/capi.md | 24 +++++++--- .../cluster-api/generic/cluster-patch.yaml | 6 --- 4 files changed, 86 insertions(+), 25 deletions(-) delete mode 100644 telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 10485c67..4ed31985 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -56,14 +56,56 @@ clusterctl generate cluster capi-quickstart --flavor development \ If Kind was running correctly, and the Docker provider was initialized with the previous command, the command will return nothing to indicate success. 4. 
Merge the contents of the resources provided in `../shared/cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with
-   `capi-quickstart.yaml`.
+   the resources contained in `capi-quickstart.yaml`.
 
 The new config will:
 - Configure TLS certificates for the extender
 - Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
 - Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler.
-- Change the behavior of the pre-existing patch application of `/spec/template/spec/kubeadmConfigSpec/files` in `ClusterClass`
-such that our new patch is not ignored/overwritten. For some more clarification on this, see [this issue](https://github.com/kubernetes-sigs/cluster-api/pull/7630).
+- Change the behavior of the pre-existing patch application of `/spec/template/spec/kubeadmConfigSpec/files` in `ClusterClass`
+  such that our new patch is not ignored/overwritten. For some more clarification on this, see [this issue](https://github.com/kubernetes-sigs/cluster-api/pull/7630).
+- Add the necessary labels for ClusterResourceSet to take effect in the workload cluster.
+
+Therefore, we will:
+- Merge the contents of file `kubeadmcontrolplanetemplate-patch.yaml` into the KubeadmControlPlaneTemplate resource of capi-quickstart.yaml.
+- Replace entirely the KubeadmControlPlaneTemplate patch item with `path` `/spec/template/spec/kubeadmConfigSpec/files` with the item present in file `clusterclass-patch.yaml`.
+- Add the necessary labels to the Cluster resource of capi-quickstart.yaml.
+
+To do this, we provide some quick `yq` commands to automate the process, but you can also merge the files manually.
+
+Patch the KubeadmControlPlaneTemplate resource by merging the contents of `kubeadmcontrolplanetemplate-patch.yaml` with the one contained in `capi-quickstart.yaml`:
+```bash
+# Extract KubeadmControlPlaneTemplate
+yq e '. | select(.kind == "KubeadmControlPlaneTemplate")' capi-quickstart.yaml > kubeadmcontrolplanetemplate.yaml
+# Merge patch
+yq eval-all '. as $item ireduce ({}; . *+ $item)' kubeadmcontrolplanetemplate.yaml kubeadmcontrolplanetemplate-patch.yaml > final-kubeadmcontrolplanetemplate.yaml
+# Replace the original KubeadmControlPlaneTemplate with the patched one
+export KCPT_FINAL=$(cat final-kubeadmcontrolplanetemplate.yaml)
+yq -i '. | select(.kind == "KubeadmControlPlaneTemplate") = env(KCPT_FINAL)' capi-quickstart.yaml
+```
+
+Replace the ClusterClass patch item with the one provided in `clusterclass-patch.yaml`:
+```bash
+# Extract ClusterClass
+yq e '. | select(.kind == "ClusterClass")' capi-quickstart.yaml > clusterclass.yaml
+export CC_PATCH=$(cat clusterclass-patch.yaml)
+yq e '(.spec.patches[].definitions[].jsonPatches[] | select(.path == "/spec/template/spec/kubeadmConfigSpec/files/-")) = env(CC_PATCH)' clusterclass.yaml > final-clusterclass.yaml
+# Replace the ClusterClass in capi-quickstart.yaml with the new one
+export CC_FINAL=$(cat final-clusterclass.yaml)
+yq -i '. | select(.kind == "ClusterClass") = env(CC_FINAL)' capi-quickstart.yaml
+```
+
+Add the necessary labels to the Cluster resource:
+```bash
+# Extract Cluster
+yq e '. | select(.kind == "Cluster")' capi-quickstart.yaml > cluster.yaml
+yq -i eval-all '. as $item ireduce ({}; . *+ $item)' cluster.yaml ../shared/cluster-patch.yaml
+```
+
+you should end up with something like [this](sample-capi-manifests.yaml).
 
 5. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience:
 
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml
index b82e8e4b..408e8dfe 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/clusterclass-patch.yaml
@@ -1,9 +1,24 @@
-apiVersion: cluster.x-k8s.io/v1beta1
-kind: ClusterClass
-spec:
-  patches:
-    - definitions:
-        - jsonPatches:
-            - op: add
-              # Note: we must add a dash - after files, as shown below. Else the patch application in KubeadmControlPlaneTemplate will fail!
-              path: /spec/template/spec/kubeadmConfigSpec/files/-
+op: add
+path: /spec/template/spec/kubeadmConfigSpec/files/-
+valueFrom:
+  template: |
+    content: |
+      apiVersion: apiserver.config.k8s.io/v1
+      kind: AdmissionConfiguration
+      plugins:
+      - name: PodSecurity
+        configuration:
+          apiVersion: pod-security.admission.config.k8s.io/v1beta1
+          kind: PodSecurityConfiguration
+          defaults:
+            enforce: "{{ .podSecurityStandard.enforce }}"
+            enforce-version: "latest"
+            audit: "{{ .podSecurityStandard.audit }}"
+            audit-version: "latest"
+            warn: "{{ .podSecurityStandard.warn }}"
+            warn-version: "latest"
+          exemptions:
+            usernames: []
+            runtimeClasses: []
+            namespaces: [kube-system]
+    path: /etc/kubernetes/kube-apiserver-admission-pss.yaml
\ No newline at end of file
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index 9ff6c62a..8342833b 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -39,20 +39,30 @@ For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.i
 3. Merge the contents of the resources provided in `../shared/cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
    `capi-quickstart.yaml`.
 
-If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility:
+The new config will:
+- Configure TLS certificates for the extender
+- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
+- Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler.
+- Add the necessary labels for ClusterResourceSet to take effect in the workload cluster.
+
+Therefore, we will:
+- Merge the contents of file `kubeadmcontrolplane-patch.yaml` into the KubeadmControlPlane resource of capi-quickstart.yaml.
+- Add the necessary labels to the Cluster resource of capi-quickstart.yaml.
+
+To do this, we provide some quick `yq` commands to automate the process, but you can also merge the files manually.
 
 > Note that if you are already using patches, `directory: /tmp/kubeadm/patches` must coincide, else the property will be
 > overwritten.
 
 ```bash
-yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
+# Extract KubeadmControlPlane
+yq e '. | select(.kind == "KubeadmControlPlane")' capi-quickstart.yaml > kubeadmcontrolplane.yaml
+yq eval-all '. as $item ireduce ({}; . *+ $item)' kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
+# Replace the original KubeadmControlPlane with the patched one
+export KCP_FINAL=$(cat final-kubeadmcontrolplane.yaml)
+yq -i '. | select(.kind == "KubeadmControlPlane") = env(KCP_FINAL)' capi-quickstart.yaml
 ```
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml b/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml
deleted file mode 100644
index 94b1488a..00000000
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/cluster-patch.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-apiVersion: cluster.x-k8s.io/v1beta1
-kind: Cluster
-metadata:
-  labels:
-    scheduler: tas
-    cni: calico

Date: Fri, 27 Jan 2023 16:33:13 +0100
Subject: [PATCH 18/21] Reformat docs.
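
Markdown does not render emphasis when the `**` markers are padded with inner spaces, so notices written as `** This guide ... **` appeared literally; they are tightened to `**This guide ...**`. Backticked file names such as `../shared/cluster-patch.yaml` also become relative links so readers can open them directly.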
---
 .../deploy/cluster-api/docker/capi-docker.md | 8 ++++----
 .../deploy/cluster-api/generic/capi.md       | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 4ed31985..17abc366 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -1,6 +1,6 @@
 # Cluster API deployment - Docker provider (for local testing/development only)
 
-** This guide is meant for local testing/development only, this is not meant for production usage.**
+**This guide is meant for local testing/development only, this is not meant for production usage.**
 
 For the deployment using a generic provider, please refer to [Cluster API deployment - Generic provider](../generic/capi.md).
 
@@ -55,8 +55,8 @@ clusterctl generate cluster capi-quickstart --flavor development \
 
 If Kind was running correctly, and the Docker provider was initialized with the previous command, the command will return nothing to indicate success.
 
-4. Merge the contents of the resources provided in `../shared/cluster-patch.yaml`, `kubeadmcontrolplanetemplate-patch.yaml` and `clusterclass-patch.yaml` with
-   the resources contained in `capi-quickstart.yaml`.
+4. Merge the contents of the resources provided in [../shared/cluster-patch.yaml](../shared/cluster-patch.yaml), [kubeadmcontrolplanetemplate-patch.yaml](kubeadmcontrolplanetemplate-patch.yaml) and [clusterclass-patch.yaml](clusterclass-patch.yaml) with
+   the resources contained in your newly generated capi-quickstart.yaml.
 
 The new config will:
 - Configure TLS certificates for the extender
@@ -180,7 +180,7 @@ kubectl apply -f '*-configmap.yaml'
 
 7. Apply the ClusterResourceSets
 
-ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`.
+ClusterResourceSets resources are already given to you in [../shared/clusterresourcesets.yaml](../shared/clusterresourcesets.yaml).
 Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml`
 
 8. Apply the cluster manifests
diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
index 8342833b..b431ed9f 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md
@@ -1,6 +1,6 @@
 # Cluster API deployment - Generic provider
 
-** This guide is meant for local testing/development only, this is not meant for production usage.**
+**This guide is meant for local testing/development only, this is not meant for production usage.**
 
 For the deployment using the Docker provider (local testing/development only), please refer to [Cluster API deployment - Docker provider](../docker/capi-docker.md).
 
@@ -36,8 +36,8 @@ Be aware that you will need to install a CNI such as Calico before the cluster w
 Calico works for the great majority of providers, so all configurations have been provided for your convenience (i.e. ClusterResourceSet, CRS label in Cluster and CRS ConfigMap).
 For more information, see [Deploy a CNI solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) in the CAPI quickstart.
 
-3. Merge the contents of the resources provided in `../shared/cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
-   `capi-quickstart.yaml`.
+3. 
Merge the contents of the resources provided in [../shared/cluster-patch.yaml](../shared/cluster-patch.yaml) and [kubeadmcontrolplane-patch.yaml](kubeadmcontrolplane-patch.yaml) with + the resources contained in your newly generated `capi-quickstart.yaml`. The new config will: - Configure TLS certificates for the extender @@ -136,7 +136,7 @@ kubectl apply -f '*-configmap.yaml' 6. Apply the ClusterResourceSets -ClusterResourceSets resources are already given to you in `../shared/clusterresourcesets.yaml`. +ClusterResourceSets resources are already given to you in [../shared/clusterresourcesets.yaml](../shared/clusterresourcesets.yaml). Apply them to the management cluster with `kubectl apply -f ../shared/clusterresourcesets.yaml` 7. Apply the cluster manifests From 299571e146d4d1b18a216482650af6cbc2301c33 Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 16:38:21 +0100 Subject: [PATCH 19/21] Add a few more links to files/folders. --- .../deploy/cluster-api/docker/capi-docker.md | 12 ++++++------ .../deploy/cluster-api/generic/capi.md | 4 ++-- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index 17abc366..a8144e33 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -56,7 +56,7 @@ clusterctl generate cluster capi-quickstart --flavor development \ If Kind was running correctly, and the Docker provider was initialized with the previous command, the command will return nothing to indicate success. 4. Merge the contents of the resources provided in [../shared/cluster-patch.yaml](../shared/cluster-patch.yaml), [kubeadmcontrolplanetemplate-patch.yaml](kubeadmcontrolplanetemplate-patch.yaml) and [clusterclass-patch.yaml](clusterclass-patch.yaml) with - the resources contained in your newly generated capi-quickstart.yaml. + the resources contained in your newly generated `capi-quickstart.yaml`. The new config will: - Configure TLS certificates for the extender @@ -67,13 +67,13 @@ The new config will: - Add the necessary labels for ClusterResourceSet to take effect in the workload cluster. Therefore, we will: -- Merge the contents of file `kubeadmcontrolplanetemplate-patch.yaml` into the KubeadmControlPlaneTemplate resource of capi-quickstart.yaml. +- Merge the contents of file [kubeadmcontrolplanetemplate-patch.yaml](kubeadmcontrolplanetemplate-patch.yaml) into the KubeadmControlPlaneTemplate resource of capi-quickstart.yaml. - Replace entirely the KubeadmControlPlaneTemplate patch item with `path` `/spec/template/spec/kubeadmConfigSpec/files` with the item present in file `clusterclass-patch.yaml`. -- Add the necessary labels to the Cluster resource of capi-quickstart.yaml. +- Add the necessary labels to the Cluster resource of `capi-quickstart.yaml`. To do this, we provide some quick `yq` commands to automate the process, but you can also merge the files manually. -Patch the KubeadmControlPlaneTemplate resource by merging the contents of `kubeadmcontrolplanetemplate-patch.yaml` with the one contained in `capi-quickstart.yaml`: +Patch the KubeadmControlPlaneTemplate resource by merging the contents of [kubeadmcontrolplanetemplate-patch.yaml](kubeadmcontrolplanetemplate-patch.yaml) with the one contained in `capi-quickstart.yaml`: ```bash # Extract KubeadmControlPlaneTemplate yq e '. 
| select(.kind == "KubeadmControlPlaneTemplate")' capi-quickstart.yaml > kubeadmcontrolplanetemplate.yaml @@ -109,7 +109,7 @@ you should end up with something like [this](sample-capi-manifests.yaml). 5. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: -First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. +First, under [telemetry-aware-scheduling/deploy/charts](../../../deploy/charts) tweak the charts if you need (e.g. additional metric scraping configurations), then render the charts: ```bash @@ -141,7 +141,7 @@ metadata: The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. Information on how to generate correctly signed certs in kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). -Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory. +Files `serving-ca.crt` and `serving-ca.key` should be in the current working directory. Run the following: diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index b431ed9f..f1af2a5f 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -65,7 +65,7 @@ yq -i '. | select(.kind == "KubeadmControlPlane") = env(KCP_FINAL)' capi-quickst 4. You will need to prepare the Helm Charts of the various components and join the TAS manifests together for convenience: -First, under `telemetry-aware-scheduling/deploy/charts` tweak the charts if you need (e.g. +First, under [telemetry-aware-scheduling/deploy/charts](../../../deploy/charts) tweak the charts if you need (e.g. additional metric scraping configurations), then render the charts: ```bash @@ -97,7 +97,7 @@ metadata: The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. Information on how to generate correctly signed certs in kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). -Files ``serving-ca.crt`` and ``serving-ca.key`` should be in the current working directory. +Files `serving-ca.crt` and `serving-ca.key` should be in the current working directory. Run the following: From ed3d30013de062426ef017a8eaa5a4345270c43e Mon Sep 17 00:00:00 2001 From: Cristiano Colangelo Date: Fri, 27 Jan 2023 16:41:19 +0100 Subject: [PATCH 20/21] Add note on how to initialize Kind cluster in Docker provider. --- .../deploy/cluster-api/docker/capi-docker.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md index a8144e33..09febf34 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md @@ -39,6 +39,14 @@ export EXP_CLUSTER_RESOURCE_SET=true 3. 
Initialize the management cluster:
 
+Note to start the Kind cluster, you will need to run the following command:
+
+```bash
+kind create cluster --config kind-cluster-with-extramounts.yaml
+```
+
+then ,to initialize the Docker provider:
+
 ```bash
 clusterctl init --infrastructure docker
 ```

From 8f98dd6884a86d9949cba09d7186ceb919f4eea9 Mon Sep 17 00:00:00 2001
From: Cristiano Colangelo
Date: Fri, 27 Jan 2023 17:23:28 +0100
Subject: [PATCH 21/21] More adjustments.

---
 .../deploy/cluster-api/docker/capi-docker.md  |  37 +-
 .../docker/sample-capi-manifests.yaml         | 402 ++++++++++++++++++
 .../deploy/cluster-api/generic/capi.md        |  26 +-
 .../{calico.yaml => calico-configmap.yaml}    |   0
 4 files changed, 449 insertions(+), 16 deletions(-)
 create mode 100644 telemetry-aware-scheduling/deploy/cluster-api/docker/sample-capi-manifests.yaml
 rename telemetry-aware-scheduling/deploy/cluster-api/shared/{calico.yaml => calico-configmap.yaml} (100%)

diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
index 09febf34..e7a3f934 100644
--- a/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
+++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/capi-docker.md
@@ -6,8 +6,6 @@ For the deployment using a generic provider, please refer to [Cluster API deploy
 
 ## Requirements
 
-- A management cluster provisioned in your infrastructure of choice and the relative tooling.
-  See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html).
 - Run Kubernetes v1.22 or greater (tested on Kubernetes v1.25).
 - Docker (tested on Docker version 20.10.22)
 - Kind (tested on Kind version 0.17.0)
@@ -39,13 +37,13 @@ export EXP_CLUSTER_RESOURCE_SET=true
 
 3. Initialize the management cluster:
 
-Note to start the Kind cluster, you will need to run the following command:
+Note to start the Kind cluster, you will need to run the following command. See also [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html):
 
 ```bash
 kind create cluster --config kind-cluster-with-extramounts.yaml
 ```
 
-then ,to initialize the Docker provider:
+then, to initialize the Docker provider:
 
 ```bash
 clusterctl init --infrastructure docker
@@ -56,7 +54,7 @@ Run the following to generate the default cluster manifests:
 ```bash
 clusterctl generate cluster capi-quickstart --flavor development \
   --kubernetes-version v1.25.0 \
-  --control-plane-machine-count=3 \
+  --control-plane-machine-count=1 \
   --worker-machine-count=3 \
   > capi-quickstart.yaml
 ```
@@ -110,7 +108,9 @@ Add the necessary labels to the Cluster resource:
 ```bash
 # Extract Cluster
 yq e '. | select(.kind == "Cluster")' capi-quickstart.yaml > cluster.yaml
-yq -i eval-all '. as $item ireduce ({}; . *+ $item)' cluster.yaml ../shared/cluster-patch.yaml
+yq eval-all '. as $item ireduce ({}; . *+ $item)' cluster.yaml ../shared/cluster-patch.yaml > final-cluster.yaml
+export C_FINAL=$(cat final-cluster.yaml)
+yq -i '. | select(.kind == "Cluster") = env(C_FINAL)' capi-quickstart.yaml
 ```
 
 you should end up with something like [this](sample-capi-manifests.yaml).
 
-yq '.' ../tas-*.yaml > tas.yaml
+yq '.' ../../tas-*.yaml > tas.yaml
 ```
 
 6. 
Create and apply the ConfigMaps @@ -176,7 +176,7 @@ kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o y kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml -kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml +kubectl create configmap extender-configmap --from-file=../../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml kubectl create configmap calico-configmap --from-file=../shared/calico-configmap.yaml -o yaml --dry-run=client > calico-configmap.yaml ``` @@ -193,15 +193,28 @@ Apply them to the management cluster with `kubectl apply -f ../shared/clusterres 8. Apply the cluster manifests -Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. -The Telemetry Aware Scheduler will be running on your new cluster. You can connect to the workload cluster by -exporting its kubeconfig: +Finally, you can apply your manifests: + +```bash +kubectl apply -f capi-quickstart.yaml +``` + +Wait until the cluster is fully initialized. You can use the following command to check its status (it should take a few minutes). +Note that both `INITIALIZED` and `API SERVER AVAILABLE` should be set to true: + +```bash +watch -n 1 kubectl get kubeadmcontrolplane +``` + +The Telemetry Aware Scheduler will be running on your new cluster. 
+ +You can connect to the workload cluster by exporting its kubeconfig: ```bash clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig ``` -Then, specifically for the CAPD docker, point the kubeconfig to the correct address of the HAProxy container: +Then, specifically for the CAPD provider, point the kubeconfig to the correct address of the HAProxy container: ```bash sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig diff --git a/telemetry-aware-scheduling/deploy/cluster-api/docker/sample-capi-manifests.yaml b/telemetry-aware-scheduling/deploy/cluster-api/docker/sample-capi-manifests.yaml new file mode 100644 index 00000000..6425e01f --- /dev/null +++ b/telemetry-aware-scheduling/deploy/cluster-api/docker/sample-capi-manifests.yaml @@ -0,0 +1,402 @@ +apiVersion: cluster.x-k8s.io/v1beta1 +kind: ClusterClass +metadata: + name: quick-start + namespace: default +spec: + controlPlane: + machineInfrastructure: + ref: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: DockerMachineTemplate + name: quick-start-control-plane + ref: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + name: quick-start-control-plane + infrastructure: + ref: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: DockerClusterTemplate + name: quick-start-cluster + patches: + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/imageRepository + valueFrom: + variable: imageRepository + selector: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + matchResources: + controlPlane: true + description: Sets the imageRepository used for the KubeadmControlPlane. + enabledIf: '{{ ne .imageRepository "" }}' + name: imageRepository + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver + value: cgroupfs + - op: add + path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver + value: cgroupfs + selector: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + matchResources: + controlPlane: true + description: | + Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced. + This is required because kind and the node images do not support the default + systemd cgroupDriver for kubernetes < v1.24. + enabledIf: '{{ semverCompare "<= v1.23" .builtin.controlPlane.version }}' + name: cgroupDriver-controlPlane + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver + value: cgroupfs + selector: + apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 + kind: KubeadmConfigTemplate + matchResources: + machineDeploymentClass: + names: + - default-worker + description: | + Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced. + This is required because kind and the node images do not support the default + systemd cgroupDriver for kubernetes < v1.24. 
+ enabledIf: '{{ semverCompare "<= v1.23" .builtin.machineDeployment.version }}' + name: cgroupDriver-machineDeployment + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd + valueFrom: + template: | + local: + imageTag: {{ .etcdImageTag }} + selector: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + matchResources: + controlPlane: true + description: Sets tag to use for the etcd image in the KubeadmControlPlane. + name: etcdImageTag + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/dns + valueFrom: + template: | + imageTag: {{ .coreDNSImageTag }} + selector: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + matchResources: + controlPlane: true + description: Sets tag to use for the etcd image in the KubeadmControlPlane. + name: coreDNSImageTag + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/customImage + valueFrom: + template: | + kindest/node:{{ .builtin.machineDeployment.version | replace "+" "_" }} + selector: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: DockerMachineTemplate + matchResources: + machineDeploymentClass: + names: + - default-worker + - jsonPatches: + - op: add + path: /spec/template/spec/customImage + valueFrom: + template: | + kindest/node:{{ .builtin.controlPlane.version | replace "+" "_" }} + selector: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: DockerMachineTemplate + matchResources: + controlPlane: true + description: Sets the container image that is used for running dockerMachines for the controlPlane and default-worker machineDeployments. + name: customImage + - definitions: + - jsonPatches: + - op: add + path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraArgs + value: + admission-control-config-file: /etc/kubernetes/kube-apiserver-admission-pss.yaml + - op: add + path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraVolumes + value: + - hostPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml + mountPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml + name: admission-pss + pathType: File + readOnly: true + - op: add + path: /spec/template/spec/kubeadmConfigSpec/files/- + valueFrom: + template: |- + content: | + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1beta1 + kind: PodSecurityConfiguration + defaults: + enforce: "{{ .podSecurityStandard.enforce }}" + enforce-version: "latest" + audit: "{{ .podSecurityStandard.audit }}" + audit-version: "latest" + warn: "{{ .podSecurityStandard.warn }}" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [kube-system] + path: /etc/kubernetes/kube-apiserver-admission-pss.yaml + selector: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KubeadmControlPlaneTemplate + matchResources: + controlPlane: true + description: Adds an admission configuration for PodSecurity to the kube-apiserver. + enabledIf: '{{ .podSecurityStandard.enabled }}' + name: podSecurityStandard + variables: + - name: imageRepository + required: true + schema: + openAPIV3Schema: + default: "" + description: imageRepository sets the container registry to pull images from. If empty, nothing will be set and the from of kubeadm will be used. 
+ example: registry.k8s.io + type: string + - name: etcdImageTag + required: true + schema: + openAPIV3Schema: + default: "" + description: etcdImageTag sets the tag for the etcd image. + example: 3.5.3-0 + type: string + - name: coreDNSImageTag + required: true + schema: + openAPIV3Schema: + default: "" + description: coreDNSImageTag sets the tag for the coreDNS image. + example: v1.8.5 + type: string + - name: podSecurityStandard + required: false + schema: + openAPIV3Schema: + properties: + audit: + default: restricted + description: audit sets the level for the audit PodSecurityConfiguration mode. One of privileged, baseline, restricted. + type: string + enabled: + default: true + description: enabled enables the patches to enable Pod Security Standard via AdmissionConfiguration. + type: boolean + enforce: + default: baseline + description: enforce sets the level for the enforce PodSecurityConfiguration mode. One of privileged, baseline, restricted. + type: string + warn: + default: restricted + description: warn sets the level for the warn PodSecurityConfiguration mode. One of privileged, baseline, restricted. + type: string + type: object + workers: + machineDeployments: + - class: default-worker + template: + bootstrap: + ref: + apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 + kind: KubeadmConfigTemplate + name: quick-start-default-worker-bootstraptemplate + infrastructure: + ref: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: DockerMachineTemplate + name: quick-start-default-worker-machinetemplate +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: DockerClusterTemplate +metadata: + name: quick-start-cluster + namespace: default +spec: + template: + spec: {} +--- +apiVersion: controlplane.cluster.x-k8s.io/v1beta1 +kind: KubeadmControlPlaneTemplate +metadata: + name: quick-start-control-plane + namespace: default +spec: + template: + spec: + kubeadmConfigSpec: + clusterConfiguration: + apiServer: + certSANs: + - localhost + - 127.0.0.1 + - 0.0.0.0 + - host.docker.internal + controllerManager: + extraArgs: + enable-hostpath-provisioner: "true" + scheduler: + extraArgs: + config: "/etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml" + extraVolumes: + - hostPath: "/etc/kubernetes/schedulerconfig" + mountPath: "/etc/kubernetes/schedulerconfig" + name: schedulerconfig + - hostPath: "/etc/kubernetes/pki/ca.key" + mountPath: "/host/certs/client.key" + name: cacert + - hostPath: "/etc/kubernetes/pki/ca.crt" + mountPath: "/host/certs/client.crt" + name: clientcert + initConfiguration: + nodeRegistration: + criSocket: unix:///var/run/containerd/containerd.sock + kubeletExtraArgs: + eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0% + patches: + directory: /etc/tas/patches + joinConfiguration: + nodeRegistration: + criSocket: unix:///var/run/containerd/containerd.sock + kubeletExtraArgs: + eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0% + patches: + directory: /etc/tas/patches + files: + - path: /etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml + content: | + apiVersion: kubescheduler.config.k8s.io/v1 + kind: KubeSchedulerConfiguration + clientConnection: + kubeconfig: /etc/kubernetes/scheduler.conf + extenders: + - urlPrefix: "https://tas-service.default.svc.cluster.local:9001" + prioritizeVerb: "scheduler/prioritize" + filterVerb: "scheduler/filter" + weight: 1 + enableHTTPS: true + managedResources: + - name: "telemetry/scheduling" + ignoredByScheduler: true + ignorable: true + 
tlsConfig: + insecure: false + certFile: "/host/certs/client.crt" + keyFile: "/host/certs/client.key" + - path: /etc/tas/patches/kube-scheduler+json.json + content: |- + [ + { + "op": "add", + "path": "/spec/dnsPolicy", + "value": "ClusterFirstWithHostNet" + } + ] +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: DockerMachineTemplate +metadata: + name: quick-start-control-plane + namespace: default +spec: + template: + spec: + extraMounts: + - containerPath: /var/run/docker.sock + hostPath: /var/run/docker.sock +--- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: DockerMachineTemplate +metadata: + name: quick-start-default-worker-machinetemplate + namespace: default +spec: + template: + spec: + extraMounts: + - containerPath: /var/run/docker.sock + hostPath: /var/run/docker.sock +--- +apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 +kind: KubeadmConfigTemplate +metadata: + name: quick-start-default-worker-bootstraptemplate + namespace: default +spec: + template: + spec: + joinConfiguration: + nodeRegistration: + criSocket: unix:///var/run/containerd/containerd.sock + kubeletExtraArgs: + eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0% +--- +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + name: capi-quickstart + namespace: default + labels: + scheduler: tas + cni: calico +spec: + clusterNetwork: + pods: + cidrBlocks: + - 192.168.0.0/16 + serviceDomain: cluster.local + services: + cidrBlocks: + - 10.128.0.0/12 + topology: + class: quick-start + controlPlane: + metadata: {} + replicas: 3 + variables: + - name: imageRepository + value: "" + - name: etcdImageTag + value: "" + - name: coreDNSImageTag + value: "" + - name: podSecurityStandard + value: + audit: restricted + enabled: true + enforce: baseline + warn: restricted + version: v1.25.0 + workers: + machineDeployments: + - class: default-worker + name: md-0 + replicas: 3 diff --git a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md index f1af2a5f..c7acba6f 100644 --- a/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md +++ b/telemetry-aware-scheduling/deploy/cluster-api/generic/capi.md @@ -23,7 +23,7 @@ export EXP_CLUSTER_RESOURCE_SET=true 2. In your management cluster, with all your environment variables set to generate cluster definitions, run for example: ```bash -clusterctl generate cluster scheduling-dev-wkld \ +clusterctl generate cluster capi-quickstart \ --kubernetes-version v1.25.0 \ --control-plane-machine-count=1 \ --worker-machine-count=3 \ @@ -112,7 +112,7 @@ You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and t ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience: ```bash -yq '.' ../tas-*.yaml > tas.yaml +yq '.' ../../tas-*.yaml > tas.yaml ``` 5. 
Create and apply the ConfigMaps @@ -124,7 +124,7 @@ kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o y kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml -kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml +kubectl create configmap extender-configmap --from-file=../../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml kubectl create configmap calico-configmap --from-file=../shared/calico-configmap.yaml -o yaml --dry-run=client > calico-configmap.yaml ``` @@ -141,8 +141,26 @@ Apply them to the management cluster with `kubectl apply -f ../shared/clusterres 7. Apply the cluster manifests -Finally, you can apply your manifests `kubectl apply -f capi-quickstart.yaml`. +Finally, you can apply your manifests: + +```bash +kubectl apply -f capi-quickstart.yaml +``` + +Wait until the cluster is fully initialized. You can use the following command to check its status (it should take a few minutes). +Note that both `INITIALIZED` and `API SERVER AVAILABLE` should be set to true: + +```bash +watch -n 1 kubectl get kubeadmcontrolplane +``` + The Telemetry Aware Scheduler will be running on your new cluster. +You can connect to the workload cluster by exporting its kubeconfig: + +```bash +clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig +``` + You can test if the scheduler actually works by following this guide: [Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/master/telemetry-aware-scheduling/docs/health-metric-example.md) \ No newline at end of file diff --git a/telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml b/telemetry-aware-scheduling/deploy/cluster-api/shared/calico-configmap.yaml similarity index 100% rename from telemetry-aware-scheduling/deploy/cluster-api/shared/calico.yaml rename to telemetry-aware-scheduling/deploy/cluster-api/shared/calico-configmap.yaml