Updated to Terraform Client 0.7 Updated docs #176

Open · wants to merge 4 commits into base: master
Changes from 2 commits
1 change: 1 addition & 0 deletions .gitignore
@@ -20,3 +20,4 @@ _projects
_steps
_book
node_modules
npm-debug.log
13 changes: 12 additions & 1 deletion docs/getting-started-guides/aws/public.md
@@ -70,15 +70,26 @@ To install the role dependencies for Ansible execute:
cd /tmp/kubeform
ansible-galaxy install -r requirements.yml
```
Ensure that an SSH server is running on port 22 on your development machine.
Contributor comment: Why is this needed?
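
If you are unsure whether an SSH daemon is listening locally, a minimal check (a sketch, assuming the `nc` netcat utility is available on your machine) is:

```
# Succeeds if something is listening on port 22 of the local machine.
nc -z localhost 22 && echo "ssh is listening on port 22" || echo "nothing is listening on port 22"
```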


To run the Ansible playbook (to configure the cluster):

```
ansible-playbook -u core --ssh-common-args="-i /tmp/kubeform/terraform/aws/public-cloud/id_rsa -q" --inventory-file=inventory site.yml -e kube_apiserver_vip=$(cd /tmp/kubeform/terraform/aws/public-cloud && terraform output master_elb_hostname)
ansible-playbook -u core --private-key=/tmp/kubeform/terraform/aws/public-cloud/id_rsa --inventory-file=inventory site.yml -e kube_apiserver_vip=$(cd /tmp/kubeform/terraform/aws/public-cloud && terraform output master_elb_hostname)
```

This will run the playbook (using the credentials output by Terraform and the Terraform state as a dynamic inventory) and inject the address of the AWS ELB for the master API servers as the variable `kube_apiserver_vip`.
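
As a quick sanity check before running the playbook (a sketch, assuming the Terraform state lives under `/tmp/kubeform/terraform/aws/public-cloud` as above), you can print the value that will be injected:

```
# Show the master ELB hostname that the playbook receives as kube_apiserver_vip.
(cd /tmp/kubeform/terraform/aws/public-cloud && terraform output master_elb_hostname)
```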

### Log in

Once you're set up, you can log into the dashboard by running:

```
kubectl proxy
```

and visiting the following URL [http://localhost:8001/ui](http://localhost:8001/ui)
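
Before opening the dashboard it can be useful to confirm that `kubectl` can reach the new cluster (a sketch; it assumes your kubeconfig already points at the cluster created above):

```
# The masters and workers should eventually report a Ready status.
kubectl get nodes
# Prints the API server and service endpoints kubectl is talking to.
kubectl cluster-info
```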

## Cluster Destroy

```
16 changes: 14 additions & 2 deletions docs/getting-started-guides/digitalocean.md
@@ -8,7 +8,7 @@ The cluster is provisioned in separate stages as follows:
## Prerequisites

1. You need a DigitalOcean account. Visit [https://cloud.digitalocean.com/registrations/new](https://cloud.digitalocean.com/registrations/new) to get started.
2. You need to have installed and configured Terraform (>= 0.6.16 recommended). Visit [https://www.terraform.io/intro/getting-started/install.html](https://www.terraform.io/intro/getting-started/install.html) to get started.
2. You need to have installed and configured Terraform (>= 0.7.13 recommended). Visit [https://www.terraform.io/intro/getting-started/install.html](https://www.terraform.io/intro/getting-started/install.html) to get started.
3. You need to have [Python](https://www.python.org/) >= 2.7.5 installed along with [pip](https://pip.pypa.io/en/latest/installing.html).
4. Kubectl installed and in your PATH:

@@ -68,14 +68,26 @@ cd /tmp/kubeform
ansible-galaxy install -r requirements.yml
```

Ensure that an SSH server is running on port 22 on your development machine.

To run the Ansible playbook (to configure the cluster):

```
ansible-playbook -u core --ssh-common-args="-i /tmp/kubeform/terraform/digitalocean/id_rsa -q" --inventory-file=inventory site.yml
ansible-playbook -u core --private-key=/tmp/kubeform/terraform/digitalocean/id_rsa --inventory-file=inventory site.yml
```

This will run the playbook, using the credentials output by Terraform and the Terraform state as a dynamic inventory.
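
If the playbook cannot reach the droplets, a minimal connectivity check (a sketch, using Ansible's ad-hoc `ping` module with the same key and inventory as the command above) is:

```
# Verify SSH connectivity to every host in the inventory before re-running the playbook.
ansible all -u core --private-key=/tmp/kubeform/terraform/digitalocean/id_rsa -i inventory -m ping
```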

### Log in

Once you're set up, you can log into the dashboard by running:

```
kubectl proxy
```

and visiting the following URL [http://localhost:8001/ui](http://localhost:8001/ui)

## Cluster Destroy

```
9 changes: 9 additions & 0 deletions docs/getting-started-guides/docker-compose.md
@@ -21,6 +21,15 @@ KUBERNETES_VERSION=1.2.4 docker-compose up -d

Replace `1.2.4` with the relevant Kubernetes version.

### Troubleshooting

When working with Docker for Mac on OS X, you may find that you need to replace two of the volume lines in docker-compose.yml with the following:

```
- /private/var/lib/docker/:/var/lib/docker:ro
- /private/var/lib/kubelet/:/var/lib/kubelet:rw
```
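
After editing `docker-compose.yml`, recreate the containers so the new mounts take effect (a sketch; `1.2.4` is just the example version used above):

```
# --force-recreate rebuilds the containers even if the compose file otherwise looks unchanged.
KUBERNETES_VERSION=1.2.4 docker-compose up -d --force-recreate
```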

## Cluster Destroy

```
5 changes: 5 additions & 0 deletions inventory/inventory
@@ -1,8 +1,13 @@
[role=masters]
Contributor comment: Why is this here?

178.62.104.128
178.62.105.27
178.62.105.128

[role=workers]
178.62.104.48

[role=edge-routers]
178.62.104.231

[masters:children]
role=masters
2 changes: 1 addition & 1 deletion terraform/digitalocean/etcd_discovery_url.txt
@@ -1 +1 @@

https://discovery.etcd.io/1b1e5527389535df10ed556cd6cca18c
32 changes: 16 additions & 16 deletions terraform/digitalocean/main.tf
@@ -47,7 +47,7 @@ resource "template_file" "etcd_discovery_url" {
}

module "ca" {
source = "github.com/Capgemini/tf_tls//ca"
source = "github.com/mattcoffey/tf_tls/ca"
Reviewer comment (@enxebre, Dec 15, 2016): Could you point the modules to github.com/tamsky/tf_tls/ for consistency? It is already being used for AWS and supports Terraform >= 0.7.3; then we can move everything back to Capgemini in one go.
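
If the module sources are repointed as suggested, the locally cached modules would need to be refreshed before planning (a sketch; run from the `terraform/digitalocean` directory that contains this configuration):

```
# Re-download the modules referenced by the configuration after changing their source attributes.
terraform get -update
```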

organization = "${var.organization}"
ca_count = "${var.masters + var.workers + var.edge-routers}"
deploy_ssh_hosts = "${concat(digitalocean_droplet.edge-router.*.ipv4_address, concat(digitalocean_droplet.master.*.ipv4_address, digitalocean_droplet.worker.*.ipv4_address))}"
@@ -62,12 +62,12 @@ module "etcd_cert" {
}

module "kube_master_certs" {
source = "github.com/Capgemini/tf_tls/kubernetes/master"
source = "github.com/mattcoffey/tf_tls/kubernetes/master"
ca_cert_pem = "${module.ca.ca_cert_pem}"
ca_private_key_pem = "${module.ca.ca_private_key_pem}"
ip_addresses = "${compact(digitalocean_droplet.master.*.ipv4_address)}"
deploy_ssh_hosts = "${compact(digitalocean_droplet.master.*.ipv4_address)}"
dns_names = "test"
dns_names = ["test"]
master_count = "${var.masters}"
validity_period_hours = "8760"
early_renewal_hours = "720"
@@ -76,7 +76,7 @@ module "kube_master_certs" {
}

module "kube_kubelet_certs" {
source = "github.com/Capgemini/tf_tls/kubernetes/kubelet"
source = "github.com/mattcoffey/tf_tls/kubernetes/kubelet"
ca_cert_pem = "${module.ca.ca_cert_pem}"
ca_private_key_pem = "${module.ca.ca_private_key_pem}"
ip_addresses = "${concat( digitalocean_droplet.edge-router.*.ipv4_address, concat(digitalocean_droplet.master.*.ipv4_address, digitalocean_droplet.worker.*.ipv4_address))}"
@@ -89,14 +89,14 @@ module "kube_kubelet_certs" {
}

module "kube_admin_cert" {
source = "github.com/Capgemini/tf_tls/kubernetes/admin"
source = "github.com/mattcoffey/tf_tls/kubernetes/admin"
ca_cert_pem = "${module.ca.ca_cert_pem}"
ca_private_key_pem = "${module.ca.ca_private_key_pem}"
kubectl_server_ip = "${digitalocean_droplet.master.0.ipv4_address}"
}

module "docker_daemon_certs" {
source = "github.com/Capgemini/tf_tls//docker/daemon"
source = "github.com/mattcoffey/tf_tls/docker/daemon"
ca_cert_pem = "${module.ca.ca_cert_pem}"
ca_private_key_pem = "${module.ca.ca_private_key_pem}"
ip_addresses_list = "${concat(digitalocean_droplet.edge-router.*.ipv4_address, concat(digitalocean_droplet.master.*.ipv4_address, digitalocean_droplet.worker.*.ipv4_address))}"
@@ -109,7 +109,7 @@ module "docker_daemon_certs" {
}

module "docker_client_certs" {
source = "github.com/Capgemini/tf_tls//docker/client"
source = "github.com/mattcoffey/tf_tls/docker/client"
ca_cert_pem = "${module.ca.ca_cert_pem}"
ca_private_key_pem = "${module.ca.ca_private_key_pem}"
ip_addresses_list = "${concat(digitalocean_droplet.edge-router.*.ipv4_address, concat(digitalocean_droplet.master.*.ipv4_address, digitalocean_droplet.worker.*.ipv4_address))}"
@@ -128,9 +128,9 @@ resource "template_file" "master_cloud_init" {
etcd_discovery_url = "${file(var.etcd_discovery_url_file)}"
size = "${var.masters}"
region = "${var.region}"
etcd_ca = "${replace(module.ca.ca_cert_pem, \"\n\", \"\\n\")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, \"\n\", \"\\n\")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, \"\n\", \"\\n\")}"
etcd_ca = "${replace(module.ca.ca_cert_pem, "\n", "\\n")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, "\n", "\\n")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, "\n", "\\n")}"
}
}

@@ -141,9 +141,9 @@ resource "template_file" "worker_cloud_init" {
etcd_discovery_url = "${file(var.etcd_discovery_url_file)}"
size = "${var.masters}"
region = "${var.region}"
etcd_ca = "${replace(module.ca.ca_cert_pem, \"\n\", \"\\n\")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, \"\n\", \"\\n\")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, \"\n\", \"\\n\")}"
etcd_ca = "${replace(module.ca.ca_cert_pem, "\n", "\\n")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, "\n", "\\n")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, "\n", "\\n")}"
}
}

@@ -154,9 +154,9 @@ resource "template_file" "edge-router_cloud_init" {
etcd_discovery_url = "${file(var.etcd_discovery_url_file)}"
size = "${var.masters}"
region = "${var.region}"
etcd_ca = "${replace(module.ca.ca_cert_pem, \"\n\", \"\\n\")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, \"\n\", \"\\n\")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, \"\n\", \"\\n\")}"
etcd_ca = "${replace(module.ca.ca_cert_pem, "\n", "\\n")}"
etcd_cert = "${replace(module.etcd_cert.etcd_cert_pem, "\n", "\\n")}"
etcd_key = "${replace(module.etcd_cert.etcd_private_key, "\n", "\\n")}"
}
}
