Deploying Kubernetes Web Servers to Digital Ocean with TLS and Terraform
What a title — so filled with keywords.
I run a number of different websites on my own personal infrastructure, and over the years I’ve found Kubernetes an effective way to manage multiple sites without having to worry about how they’re running. The auto-scaling and self-healing aspects of Kubernetes — for me — are too good to pass up and well worth any initial complexity associated with setting up a cluster. I want my websites to run themselves so I can run to the nearest patio serving nachos.
If this is your first time working with Kubernetes, doing something as simple as hosting a website can be quite the challenge. So with that in mind, I’m gonna walk you through setting up a Kubernetes Web Server (with TLS) on Digital Ocean using the infrastructure-as-code tool Terraform.
Terraform: The Tool, The Myth, The Legend
If you’re not familiar with Terraform, it’s an infrastructure management tool which allows you to provision cloud resources with lines of code. It can be a bit overwhelming if you’re just getting into infrastructure, but if someone’s already written it for you, it’s a great way to provision a Kubernetes cluster.
What You’ll Need
For this tutorial you’re gonna need the following:
- A Digital Ocean account and an API token
- Terraform installed on your computer
- A domain, connected to Digital Ocean’s name servers
- A willingness to read tutorials with poorly written jokes
The Code
If you’re the kind of person who just wants the code, you can snag it from here. If you wanna know how it works, read on.
Call me by your Main.tf
The first thing we’re going to do is create a file called main.tf and populate it with the following:
What we’re doing here is defining three Terraform providers: Digital Ocean, Kubernetes, and Helm (a Kubernetes package manager). These providers let us access their respective resources in downstream files.
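A minimal main.tf looks something like the following. The provider sources are the official ones, but the version constraints and the cluster resource name (`main`) are my illustrative assumptions, not necessarily what the repository uses:

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
  }
}

# Digital Ocean authenticates with the API token variable.
provider "digitalocean" {
  token = var.do_token
}

# The Kubernetes and Helm providers point at the cluster
# defined later in cluster.tf.
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.main.endpoint
  token = digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}

provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.main.endpoint
    token = digitalocean_kubernetes_cluster.main.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
    )
  }
}
```

Wiring the Kubernetes and Helm providers to the cluster resource's outputs means Terraform can provision on-cluster resources in the same apply that creates the cluster itself.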
Variables
Next we create another file called variables.tf and populate it with the following:
I’ve tried to describe each variable as best I can. Feel free to provide your values directly in this file as defaults, via a .tfvars file, or on the command line when you plan and apply these changes.
The one to highlight here is top_level_domains. I have mine populated with the following (with example.com replaced with my own domain):
- tacos.tutorial.example.com
- nachos.tutorial.example.com
For each one of these subdomains you provide, the script will provision a separate Kubernetes deployment with a separate TLS certificate.
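A sketch of variables.tf under my assumptions — `top_level_domains` matches the variable named above, while `do_token` and `region` are illustrative names for the other inputs the scripts need:

```hcl
variable "do_token" {
  description = "Digital Ocean API token"
  type        = string
  sensitive   = true
}

variable "top_level_domains" {
  description = "Domains to host; each gets its own deployment and TLS certificate"
  type        = list(string)
  default = [
    "tacos.tutorial.example.com",
    "nachos.tutorial.example.com",
  ]
}

variable "region" {
  description = "Digital Ocean region for the cluster and load balancer"
  type        = string
  default     = "tor1"
}
```

Marking the token as `sensitive` keeps it out of Terraform's plan output.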
Building The Cluster
The next file of interest is cluster.tf, in which we are defining the type of Kubernetes cluster we want to create.
We’re keeping this cluster pretty simple, with a single node pool that defaults to one or two nodes. If you’re new to Kubernetes, a node is the underlying machine that is part of the Kubernetes cluster.
On most cloud services these are referred to as Virtual Machines, Compute Instances, or EC2 instances, but on Digital Ocean they’re referred to as Droplets.
This script provisions a control plane and a series of nodes. The only things you’ll be able to see in your cloud.digitalocean.com console are the Droplets and the cluster. Everything else is hidden (which is ultimately for the best).
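A cluster.tf along these lines would do it. The Kubernetes version slug and Droplet size here are placeholders I've chosen for illustration — check what's currently available before copying them:

```hcl
resource "digitalocean_kubernetes_cluster" "main" {
  name   = "tutorial-cluster"
  region = var.region

  # Run `doctl kubernetes options versions` for current slugs.
  version = "1.28.2-do.0"

  node_pool {
    name       = "default-pool"
    size       = "s-2vcpu-2gb" # Droplet size for each node
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 2
  }
}
```

With `auto_scale` enabled, Digital Ocean adds or removes Droplets within the min/max bounds as your workloads demand.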
The Load Balancer
The load balancer is going to handle all of our traffic for both websites we’re using in the example. If you’re doing this professionally, I’d recommend you provision an independent load balancer for each service. If, like me, you’re not made of money, your sites will need to share.
We provision the load balancer separately, outside of Kubernetes. Technically, exposing our ingress controller through a Kubernetes service of type LoadBalancer would create one for us, but trying to grab its IP address with Terraform is like herding cats.
Instead, we can provision the load balancer separately, assign it to the Kubernetes ingress down the line, and snag its IP address in a reliable way to provision DNS records.
In the load_balancer.tf file, you’ll find something like this:
We define a temporary forwarding rule, since one is required to create the load balancer, but with ignore_changes we tell Terraform to disregard any modifications made to it by the Kubernetes cluster.
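Here's a sketch of what that looks like in practice (the resource and rule details are my assumptions about the repository's shape):

```hcl
resource "digitalocean_loadbalancer" "main" {
  name   = "tutorial-lb"
  region = var.region

  # Placeholder rule: a load balancer can't be created without one,
  # but the ingress controller will rewrite it once it takes over.
  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "http"
    target_port     = 80
    target_protocol = "http"
  }

  lifecycle {
    # The cluster manages forwarding rules and Droplet targets at
    # runtime; don't let Terraform revert those changes on each apply.
    ignore_changes = [forwarding_rule, droplet_ids]
  }
}
```

The `lifecycle` block is the key trick: Terraform still owns the load balancer (and its IP address), but stops fighting Kubernetes over its configuration.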
DNS Records: A Software Engineer’s Favourite Thing
With our load balancer configured, we can create DNS records. Once again, you do need to be using Digital Ocean’s name servers for this to function properly.
For each domain defined in the top_level_domains variable, we’ll create a domain resource, an A record pointing to the load balancer, and a CNAME for the www version of the site.
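Those three records per domain can be expressed with `for_each`, roughly like this (resource names are illustrative):

```hcl
resource "digitalocean_domain" "sites" {
  for_each = toset(var.top_level_domains)
  name     = each.value
}

# Point the bare domain at the load balancer's IP.
resource "digitalocean_record" "a_records" {
  for_each = toset(var.top_level_domains)
  domain   = digitalocean_domain.sites[each.value].name
  type     = "A"
  name     = "@"
  value    = digitalocean_loadbalancer.main.ip
  ttl      = 300
}

# Alias the www version back to the bare domain.
resource "digitalocean_record" "www_records" {
  for_each = toset(var.top_level_domains)
  domain   = digitalocean_domain.sites[each.value].name
  type     = "CNAME"
  name     = "www"
  value    = "${each.value}." # CNAME targets need the trailing dot
  ttl      = 300
}
```

Because the load balancer is a Terraform-managed resource, its `ip` attribute is available directly — this is the reliable IP-grabbing mentioned above.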
On-Cluster Deployments
Now that the cluster is up and running, we can provision a few deployments and services that will do the web hosting. I’m using an Nginx demo for each site, but you can replace it with any Docker image of your choosing.
In reality, I don’t manage my on-cluster resources (deployments, services, volume claims, cron jobs, etc.) through Terraform. I like to keep them as standard Kubernetes YAML manifests alongside the code of each project. For a demonstration, however, these Terraform-defined deployments and services will work fine.
This cluster_resources.tf file defines a service and a deployment for each domain you provided in variables.tf.
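The pair of resources might look like this — I'm deriving Kubernetes names from the domain (dots aren't legal in resource names) and using the public `nginxdemos/hello` image as the stand-in site; both choices are assumptions:

```hcl
resource "kubernetes_deployment" "site" {
  for_each = toset(var.top_level_domains)

  metadata {
    name = replace(each.value, ".", "-")
  }

  spec {
    replicas = 2
    selector {
      match_labels = { app = replace(each.value, ".", "-") }
    }
    template {
      metadata {
        labels = { app = replace(each.value, ".", "-") }
      }
      spec {
        container {
          name  = "web"
          image = "nginxdemos/hello" # swap in any image you like
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

# A ClusterIP service in front of each deployment, for the ingress
# to route to.
resource "kubernetes_service" "site" {
  for_each = toset(var.top_level_domains)

  metadata {
    name = replace(each.value, ".", "-")
  }

  spec {
    selector = { app = replace(each.value, ".", "-") }
    port {
      port        = 80
      target_port = 80
    }
  }
}
```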
I need to speak to your Certificate Manager
There are a few ways we can handle TLS termination, but I’m going with an on-cluster method using cert-manager and Let’s Encrypt to generate our TLS certificates.
Now that our DNS records are ready, we can use the following certificate_manager.tf file to set up everything certificate related.
The first resource is a Helm chart from Jetstack, but the second is something home-grown. See the repository for specifics, but we’re essentially setting up two ClusterIssuer objects which can provision staging (fake) and production (real) TLS certificates for us using Let’s Encrypt.
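Sketched out, it's roughly the following — the chart values are the standard cert-manager install, while expressing the ClusterIssuer via `kubernetes_manifest` is just one way to do it (the repository may differ), and the email is a placeholder:

```hcl
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}

# Production issuer; a staging twin would point at the
# acme-staging-v02 server instead.
resource "kubernetes_manifest" "letsencrypt_prod" {
  depends_on = [helm_release.cert_manager]

  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata   = { name = "letsencrypt-prod" }
    spec = {
      acme = {
        server              = "https://acme-v02.api.letsencrypt.org/directory"
        email               = "you@example.com" # replace with yours
        privateKeySecretRef = { name = "letsencrypt-prod" }
        solvers = [{
          http01 = { ingress = { class = "nginx" } }
        }]
      }
    }
  }
}
```

The staging issuer is worth keeping around: Let’s Encrypt rate-limits production certificates, so test against staging first.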
Ingress: The Final Frontier
Our final file is the ingress.tf file, which takes care of provisioning the Nginx-ingress controller as well as our ingress rules.
Traffic forwarded from the load balancer is picked up by the Nginx ingress controller, and routed based on the host (nachos.tutorial.example.com or tacos.tutorial.example.com) to the respective service.
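Under the same naming assumptions as before, ingress.tf might look like this. The `kubernetes.digitalocean.com/load-balancer-id` annotation is the documented way to have the controller's service adopt an existing Digital Ocean load balancer rather than create a new one:

```hcl
resource "helm_release" "nginx_ingress" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  # Adopt the load balancer we provisioned in load_balancer.tf.
  set {
    name  = "controller.service.annotations.kubernetes\\.digitalocean\\.com/load-balancer-id"
    value = digitalocean_loadbalancer.main.id
  }
}

# One ingress per domain, with cert-manager issuing its certificate.
resource "kubernetes_ingress_v1" "sites" {
  for_each = toset(var.top_level_domains)

  metadata {
    name = replace(each.value, ".", "-")
    annotations = {
      "cert-manager.io/cluster-issuer" = "letsencrypt-prod"
    }
  }

  spec {
    ingress_class_name = "nginx"

    tls {
      hosts       = [each.value]
      secret_name = "${replace(each.value, ".", "-")}-tls"
    }

    rule {
      host = each.value
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = replace(each.value, ".", "-")
              port { number = 80 }
            }
          }
        }
      }
    }
  }
}
```

The `cert-manager.io/cluster-issuer` annotation is what ties the whole TLS story together: cert-manager watches for it, solves the HTTP-01 challenge through this same ingress, and stores the certificate in the named secret.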
Putting It All Together
Time to deploy our code with a few easy commands.
Once you’ve cloned the repository, descend into the terraform directory:
cd terraform
and initialize Terraform:
terraform init
Once that’s finished running, and your variables are all set, we can run the following:
terraform apply
This command will plan out the steps to take and check with you first to make sure you’re okay with them. Fair warning: provisioning these resources will cost you money, but leaving them up for about an hour should only be a few cents.
Once you type “yes” and hit enter, Terraform will start creating your resources. This will generally take about 20 minutes, so make yourself a plate of nachos (this step is essential to creating a successful cluster).
In your Digital Ocean console, you’ll begin to see resources being created and connected together.
If all goes successfully, you’ll be greeted with an “Apply complete!” message from Terraform.
You can now visit the two domains you created in your browser. With any luck, you’ll see nginx demo pages on fully secured domains.
To tear down the resources you just created you can run the following:
terraform destroy
And that’s it! Hopefully this helps unravel a little of the complexity behind some of these tools, and hopefully I’ve persuaded you to try Kubernetes, Terraform, and a plate of nachos.