Talos on OVH Cloud

This Terraform example installs Talos on OpenStack with IPv4/IPv6 support.

Tested on OpenStack Stein with the following services:

  • Nova
  • Glance
  • Neutron
  • Cinder

Local utilities

  • terraform
  • talosctl
  • kubectl
  • yq
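To confirm these tools are available locally, a quick check like the following can be run:

# Print the version of each required tool
terraform version
talosctl version --client
kubectl version --client
yq --version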

Network diagram

  • Public and private subnets share one L2 network, so they can reach each other without a gateway.
  • Only the public interface in the public network has a firewall.
  • Virtual machines in the public subnet have a public interface with public IPv4/IPv6 and local IPv4/IPv6 addresses.
  • Virtual machines in the private subnet have a public interface with only a public IPv6 plus local IPv4/IPv6 addresses. It is not a classic private network: the network has NATv4, and the machines share one public IPv4 for outgoing requests.
  • The Talos control plane uses its own L2 load balancer (VIP); see the sketch after this list.
  • Worker nodes connect to the VIP address in their own region. If the current region has no control plane, workers connect to another region's VIP.
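For reference, the shared control plane VIP is set in the Talos machine configuration. Below is a minimal sketch of such a patch; the interface name, file name, and 192.0.2.10 address are placeholders, and the repository's own templates may configure this differently.

# Hypothetical patch enabling a shared VIP on the first controlplane NIC
cat > controlplane-vip-patch.yaml <<'EOF'
machine:
  network:
    interfaces:
      - interface: eth0   # placeholder interface name
        dhcp: true
        vip:
          ip: 192.0.2.10  # placeholder VIP address
EOF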

Kubernetes addons

Upload the Talos image

Create the config file images/terraform.tfvars and add the parameters:

# Body of images/terraform.tfvars

# Regions to use
regions          = ["GRA7", "GRA9"]

Download and unpack the Talos OpenStack image, then upload it with Terraform:

cd images
wget https://github.com/siderolabs/talos/releases/download/v1.0.5/openstack-amd64.tar.gz
tar -xzf openstack-amd64.tar.gz

terraform init && terraform apply
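If the openstack CLI is configured for the project, the uploaded image can be verified per region; the grep pattern below is an assumption about the image name:

# Check that the Talos image is visible in the current region
openstack image list | grep -i talos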

Prepare network

Create the config file prepare/terraform.tfvars and add the parameters:

# Body of prepare/terraform.tfvars

# Regions to use
regions          = ["GRA7", "GRA9"]

Then create the network resources:

make create-network
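The resources created by this step can be reviewed with the openstack CLI, for example:

# Inspect the networks, subnets, and routers created by the prepare step
openstack network list
openstack subnet list
openstack router list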

Prepare configs

Generate the default Talos config:

make create-templates
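This target roughly corresponds to generating a fresh Talos configuration. A hedged sketch of the underlying call is shown below; the cluster name and endpoint are placeholders, and the Makefile may apply extra patches on top.

# Generate controlplane.yaml, worker.yaml, and talosconfig for the cluster
talosctl gen config talos-ovh https://192.0.2.10:6443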

Create the config file terraform.tfvars and add the parameters:

# Body of terraform.tfvars

# OCCM credentials
ccm_username = "openstack-username"
ccm_password = "openstack-password"

# Number of Kubernetes control plane nodes per zone
controlplane = {
  "GRA9" = {
    count = 1,
    type  = "d2-4",
  },
}

# Number of Kubernetes nodes per zone
instances = {
  "GRA9" = {
    web_count            = 1,
    web_instance_type    = "d2-2",
    worker_count         = 1,
    worker_instance_type = "d2-2"
  },
}

Bootstrap controlplane

make create-controlplane
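The make target likely wraps Terraform plus the initial Talos bootstrap; for reference, the underlying etcd bootstrap is a single talosctl call against the first control plane node (the talosconfig path and node IP below are placeholders):

# Bootstrap etcd on the first controlplane node (run exactly once)
talosctl --talosconfig talosconfig bootstrap --nodes 192.0.2.21 --endpoints 192.0.2.21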

Download configs

make create-kubeconfig
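Once the kubeconfig has been downloaded, point kubectl at it; the file name below is an assumption, so adjust it to wherever the Makefile writes the config:

# Use the downloaded kubeconfig and confirm the controlplane answers
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes -o wide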

Deploy all other instances

make create-infrastructure
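After the remaining instances join, the cluster can be checked with:

# All controlplane, web, and worker nodes should eventually become Ready
kubectl get nodes -o wide
kubectl get pods -n kube-system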

Known Issues

  • OCCM: The OpenStack cloud controller manager does not work well with zones. It will delete nodes from other zones (because it cannot find those nodes in the cloud provider). References:

    Solution:

    Use OCCM from my fork. You can run many OCCM instances, each with a different --leader-elect-resource-name=cloud-controller-manager-$region flag (see the sketch after this list).

    It creates the ProviderID with the region name inside, such as openstack://$region/$id.

  • CSI: OpenStack Cinder cannot work across different zones. You need to install two or more DaemonSets, one for each zone.

  • The node autoscaler works only with OpenStack Magnum; unfortunately, I do not have it.
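As noted in the OCCM item above, a separate controller manager per region needs its own leader-election lock. The fragment below is only an illustration of the relevant arguments; the deployment layout, file name, and region value are assumptions.

# Illustrative per-region OCCM arguments: each region gets a distinct
# leader-election resource name so the instances do not evict each other
region=GRA7
cat > occm-args-${region}.yaml <<EOF
- --cloud-provider=openstack
- --cloud-config=/etc/config/cloud.conf
- --leader-elect=true
- --leader-elect-resource-name=cloud-controller-manager-${region}
EOF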