Few updates to wording and capitalizing vCluster
Updates to wording and capitalizing vCluster
mpetason committed Nov 17, 2023
1 parent 1f8d1bc commit 8591f41
Showing 1 changed file with 9 additions and 9 deletions.
18 changes: 9 additions & 9 deletions docs/pages/what-are-virtual-clusters.mdx
@@ -10,28 +10,28 @@ Virtual clusters are fully working Kubernetes clusters that run on top of other
<figcaption>vCluster - Architecture</figcaption>
</figure>

The virtual cluster itself only consists of the core Kubernetes components: API server, controller manager, storage backend (such as etcd, sqlite, mysql etc.) and optionally a scheduler. To reduce virtual cluster overhead, vCluster builds by default on [k3s](https://k3s.io/), which is a fully working, certified, lightweight Kubernetes distribution that compiles the Kubernetes components into a single binary and disables by default all not needed Kubernetes features, such as the pod scheduler or certain controllers.
The virtual cluster itself consists only of the core Kubernetes components: API server, controller manager, storage backend (such as etcd, SQLite, or MySQL), and optionally a scheduler. To reduce virtual cluster overhead, vCluster builds by default on [k3s](https://k3s.io/), a fully working, certified, lightweight Kubernetes distribution that compiles the Kubernetes components into a single binary and disables all unneeded Kubernetes features by default, such as the pod scheduler or certain controllers.

Besides k3s, other Kubernetes distributions such as [k0s and vanilla k8s are supported](./deploying-vclusters/supported-distros.mdx). In addition to the control plane, there is also a Kubernetes hypervisor that emulates networking and worker nodes inside the virtual cluster. This component syncs a handful of core resources that are essential for cluster functionality between the virtual and host cluster:

* **Pods**: All pods that are started in the virtual cluster are rewritten and then started in the namespace of the virtual cluster in the host cluster. Service account tokens, environment variables, DNS and other configurations are exchanged to point to the virtual cluster instead of the host cluster. Within the pod, it so seems that the pod is started within the virtual cluster instead of the host cluster.
* **Pods**: All pods that are started in the virtual cluster are rewritten and then started in the namespace of the virtual cluster in the host cluster. Service account tokens, environment variables, DNS and other configurations are exchanged to point to the virtual cluster instead of the host cluster. From within the pod, it appears as though the pod was started in the virtual cluster rather than the host cluster (a conceptual sketch of this rewriting appears below).
* **Services**: All services and endpoints are rewritten and created in the namespace of the virtual cluster in the host cluster. The virtual and host cluster share the same service cluster IPs. This also means that a service in the host cluster can be reached from within the virtual cluster without any performance penalties.
* **PersistentVolumeClaims**: If persistent volume claims are created in the virtual cluster, they will be mutated and created in the namespace of the virtual cluster in the host cluster. If they are bound in the host cluster, the corresponding persistent volume information will be synced back to the virtual cluster.
* **Configmaps & Secrets**: ConfigMaps or secrets in the virtual cluster that are mounted to pods will be synced to the host cluster, all other configmaps or secrets will purely stay in the virtual cluster.
* **Other Resources**: Deployments, statefulsets, CRDs, service accounts etc. are **NOT** synced to the host cluster and purely exist in the virtual cluster.

See [synced resources](./syncer/core_resources.mdx) for more information about what resources are synced exactly.
See [synced resources](./syncer/core_resources.mdx) for more information about what resources are synced.
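
The pod rewriting described in the list above can be pictured as a simple name translation: an object keeps its identity inside the virtual cluster, while its copy in the host cluster is renamed so that objects from many virtual namespaces can coexist in the vCluster's single host namespace. The following Go sketch only illustrates the idea; the `-x-` naming scheme, the function, and the example names are assumptions for illustration, not vCluster's actual implementation.

```go
package main

import "fmt"

// translateToHost returns the name a virtual-cluster object might get when it
// is synced into the vCluster's single namespace on the host cluster. The
// "-x-" scheme is an assumption used for illustration only.
func translateToHost(virtualName, virtualNamespace, vclusterName string) string {
	return fmt.Sprintf("%s-x-%s-x-%s", virtualName, virtualNamespace, vclusterName)
}

func main() {
	// A pod "web-0" in namespace "team-a" of a virtual cluster named
	// "my-vcluster" would end up in the host namespace under a name like:
	fmt.Println(translateToHost("web-0", "team-a", "my-vcluster"))
	// Output: web-0-x-team-a-x-my-vcluster
}
```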

In addition to the synchronization of virtual and host cluster resources, the hypervisor proxies certain Kubernetes API requests to the host cluster, such as pod port forwarding or container command execution. It essentially acts as a reverse proxy for the virtual cluster.
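
To make the reverse-proxy role concrete, the sketch below shows the general idea in Go: ordinary resource requests are answered by the virtual control plane, while connection-oriented requests such as exec, port-forward, or log streaming are forwarded to the host cluster. The endpoints, path matching, and handler are assumptions for illustration; vCluster's real proxy additionally rewrites resource names and handles authentication and TLS.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Placeholder endpoints for the virtual API server and the host cluster's
	// API server; both URLs are illustrative only.
	virtualAPI, _ := url.Parse("https://my-vcluster.vcluster.svc:443")
	hostAPI, _ := url.Parse("https://kubernetes.default.svc:443")

	toVirtual := httputil.NewSingleHostReverseProxy(virtualAPI)
	toHost := httputil.NewSingleHostReverseProxy(hostAPI)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Requests that must reach real kubelets and containers are proxied
		// to the host cluster; everything else stays in the virtual cluster.
		p := r.URL.Path
		if strings.Contains(p, "/exec") || strings.Contains(p, "/portforward") || strings.Contains(p, "/log") {
			toHost.ServeHTTP(w, r)
			return
		}
		toVirtual.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8443", handler))
}
```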

## Why use Virtual Kubernetes Clusters?

Virtual clusters can be used to partition a single physical cluster into multiple ones while leveraging the benefits of Kubernetes itself, such as optimal resource distribution and workload management.
While Kubernetes itself already provides namespaces for multiple environments, they are limited in terms of cluster-scoped resources and control-plane usage:
Virtual clusters can be used to partition a single physical cluster into multiple clusters while leveraging the benefits of Kubernetes, such as optimal resource distribution and workload management.
While Kubernetes already provides namespaces for multiple environments, they are limited in terms of cluster-scoped resources and control-plane usage:

* **Cluster-Scoped Resources**: Certain resources live globally in the cluster, and you can’t isolate them using namespaces. For example, installing an operator in different versions at the same time is not possible within a single cluster.
* **Cluster-Scoped Resources**: Certain resources live globally in the cluster, and you can’t isolate them using namespaces. For example, installing different versions of an operator at the same time is not possible within a single cluster.

* **Shared Kubernetes control plane**: the API server, etcd, scheduler, and controller-manager are shared in a single Kubernetes cluster across all namespaces. Request or storage rate-limiting based on a namespace is very hard to enforce and faulty configuration might bring down the whole cluster.
* **Shared Kubernetes Control Plane**: The API server, etcd, scheduler, and controller-manager are shared in a single Kubernetes cluster across all namespaces. Request or storage rate-limiting based on a namespace is very hard to enforce and faulty configuration might bring down the whole cluster.

Virtual clusters also provide more stability than namespaces in many situations. The virtual cluster creates its own Kubernetes resource objects, which are stored in its own data store. The host cluster has no knowledge of these resources.

@@ -42,7 +42,7 @@ Because you can have many virtual clusters within a single cluster, they are muc
Finally, virtual clusters can be configured independently of the physical cluster. This is great for multi-tenancy, like giving your customers the ability to spin up a new environment or quickly setting up demo applications for your sales team.

<figure>
<img src="/docs/media/vcluster-comparison.png" alt="vcluster Comparison" />
<figcaption>vcluster - Comparison</figcaption>
<img src="/docs/media/vcluster-comparison.png" alt="vCluster Comparison" />
<figcaption>vCluster - Comparison</figcaption>
</figure>
