Cloud Platforms
- 1: AWS
- 2: Azure
- 3: DigitalOcean
- 4: GCP
1 - AWS
Creating a Cluster via the AWS CLI
In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing VPC, and some familiarity with AWS. If you need more information on AWS specifics, please see the official AWS documentation.
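Environment Setup
We’ll reference a few environment variables throughout this section. A minimal sketch of the values assumed (all placeholders; substitute your own region, VPC, and addressing):
# Example values only; replace with your own.
export REGION="us-east-1"
export VPC="vpc-0123456789abcdef0"
export CIDR_BLOCK="10.1.0.0/24"
export BUCKET="talos-aws-tutorial-bucket"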
Create the Subnet
aws ec2 create-subnet \
--region $REGION \
--vpc-id $VPC \
--cidr-block ${CIDR_BLOCK}
Create the AMI
Prepare the Import Prerequisites
Create the S3 Bucket
aws s3api create-bucket \
--bucket $BUCKET \
--create-bucket-configuration LocationConstraint=$REGION \
--acl private
Create the vmimport Role
In order to create an AMI, ensure that the vmimport role exists as described in the official AWS documentation.
Note that the role should be associated with the S3 bucket we created above.
Create the Image Snapshot
First, download the AWS image from a Talos release:
curl -L https://github.com/talos-systems/talos/releases/latest/download/aws-amd64.tar.gz | tar -xzv
Copy the RAW disk to S3 and import it as a snapshot:
aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
aws ec2 import-snapshot \
--region $REGION \
--description "Talos kubernetes tutorial" \
--disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}"
Save the ImportTaskId from the output; we will use it to track the import.
To check on the status of the import, run:
aws ec2 describe-import-snapshot-tasks \
--region $REGION \
--import-task-ids $IMPORT_TASK_ID
Once the SnapshotTaskDetail.Status indicates completed, we can register the image.
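When it does, the snapshot ID can be captured for the registration step below (a sketch; assumes $IMPORT_TASK_ID holds the ImportTaskId saved earlier):
SNAPSHOT=$(aws ec2 describe-import-snapshot-tasks \
--region $REGION \
--import-task-ids $IMPORT_TASK_ID \
--query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' \
--output text)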
Register the Image
aws ec2 register-image \
--region $REGION \
--block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT,VolumeSize=4,VolumeType=gp2}" \
--root-device-name /dev/xvda \
--virtualization-type hvm \
--architecture x86_64 \
--ena-support \
--name talos-aws-tutorial-ami
We now have an AMI we can use to create our cluster. Save the AMI ID, as we will need it when we create EC2 instances.
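If you did not capture the ImageId from the register-image output, it can be looked up by the name used above (a sketch):
AMI=$(aws ec2 describe-images \
--region $REGION \
--filters "Name=name,Values=talos-aws-tutorial-ami" \
--query 'Images[0].ImageId' \
--output text)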
Create a Security Group
aws ec2 create-security-group \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--description "Security Group for EC2 instances to allow ports required by Talos" \
--vpc-id $VPC
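Capture the new group’s ID for the rules below (a sketch; create-security-group also prints it as GroupId):
SECURITY_GROUP=$(aws ec2 describe-security-groups \
--region $REGION \
--filters Name=group-name,Values=talos-aws-tutorial-sg Name=vpc-id,Values=$VPC \
--query 'SecurityGroups[0].GroupId' \
--output text)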
Using the security group ID from above, allow all internal traffic within the same security group:
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol all \
--port 0 \
--group-id $SECURITY_GROUP \
--source-group $SECURITY_GROUP
and expose the Talos and Kubernetes APIs:
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol tcp \
--port 6443 \
--cidr 0.0.0.0/0 \
--group-id $SECURITY_GROUP
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol tcp \
--port 50000-50001 \
--cidr 0.0.0.0/0 \
--group-id $SECURITY_GROUP
Create a Load Balancer
aws elbv2 create-load-balancer \
--region $REGION \
--name talos-aws-tutorial-lb \
--type network --subnets $SUBNET
Take note of the DNS name and ARN. We will need these soon.
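Both can be retrieved with a describe call (a sketch):
aws elbv2 describe-load-balancers \
--region $REGION \
--names talos-aws-tutorial-lb \
--query 'LoadBalancers[0].[DNSName,LoadBalancerArn]' \
--output text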
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Validate the Configuration Files
$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode
Create the EC2 Instances
Note: There is a known issue that prevents Talos from running on T2 instance types. Please use T3 if you need burstable instance types.
Create the Bootstrap Node
aws ec2 run-instances \
--region $REGION \
--image-id $AMI \
--count 1 \
--instance-type t3.small \
--user-data file://init.yaml \
--subnet-id $SUBNET \
--security-group-ids $SECURITY_GROUP \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-0}]"
Create the Remaining Control Plane Nodes
CP_COUNT=1
while [[ "$CP_COUNT" -lt 3 ]]; do
aws ec2 run-instances \
--region $REGION \
--image-id $AMI \
--count 1 \
--instance-type t3.small \
--user-data file://controlplane.yaml \
--subnet-id $SUBNET \
--security-group-ids $SECURITY_GROUP \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$CP_COUNT}]"
((CP_COUNT++))
done
Make a note of the resulting PrivateIpAddress from the init and controlplane nodes for later use.
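One way to list them (a sketch; filters on the Name tags used above):
aws ec2 describe-instances \
--region $REGION \
--filters "Name=tag:Name,Values=talos-aws-tutorial-cp-*" "Name=instance-state-name,Values=running" \
--query 'Reservations[].Instances[].PrivateIpAddress' \
--output text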
Create the Worker Nodes
aws ec2 run-instances \
--region $REGION \
--image-id $AMI \
--count 3 \
--instance-type t3.small \
--user-data file://join.yaml \
--subnet-id $SUBNET \
--security-group-ids $SECURITY_GROUP \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-worker}]"
Configure the Load Balancer
aws elbv2 create-target-group \
--region $REGION \
--name talos-aws-tutorial-tg \
--protocol TCP \
--port 6443 \
--vpc-id $VPC
Now, using the target group’s ARN and the PrivateIpAddress of each control plane node you created, register them as targets:
aws elbv2 register-targets \
--region $REGION \
--target-group-arn $TARGET_GROUP_ARN \
--targets Id=$CP_NODE_1_IP Id=$CP_NODE_2_IP Id=$CP_NODE_3_IP
Using the ARNs of the load balancer and target group from previous steps, create the listener:
aws elbv2 create-listener \
--region $REGION \
--load-balancer-arn $LOAD_BALANCER_ARN \
--protocol TCP \
--port 443 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .
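As a quick sanity check (the kubeconfig is written to the current directory as kubeconfig):
kubectl --kubeconfig ./kubeconfig get nodes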
2 - Azure
Creating a Cluster via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume existing Blob Storage, and some familiarity with Azure. If you need more information on Azure specifics, please see the official Azure documentation.
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your own values.
# Storage account to use
export STORAGE_ACCOUNT="StorageAccountName"
# Storage container to upload to
export STORAGE_CONTAINER="StorageContainerName"
# Resource group name
export GROUP="ResourceGroupName"
# Location
export LOCATION="centralus"
# Get storage account connection string based on info above
export CONNECTION=$(az storage account show-connection-string \
-n $STORAGE_ACCOUNT \
-g $GROUP \
-o tsv)
Create the Image
First, download the Azure image from a Talos release.
Once downloaded, untar with tar -xvf /path/to/azure-amd64.tar.gz
Upload the VHD
Once you have pulled down the image, you can upload it to blob storage with:
az storage blob upload \
--connection-string $CONNECTION \
--container-name $STORAGE_CONTAINER \
-f /path/to/extracted/talos-azure.vhd \
-n talos-azure.vhd
Register the Image
Now that the image is present in our blob storage, we’ll register it.
az image create \
--name talos \
--source https://$STORAGE_ACCOUNT.blob.core.windows.net/$STORAGE_CONTAINER/talos-azure.vhd \
--os-type linux \
-g $GROUP
Network Infrastructure
Virtual Networks and Security Groups
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a network security group and add rules to it.
# Create vnet
az network vnet create \
--resource-group $GROUP \
--location $LOCATION \
--name talos-vnet \
--subnet-name talos-subnet
# Create network security group
az network nsg create -g $GROUP -n talos-sg
# Client -> apid
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n apid \
--priority 1001 \
--destination-port-ranges 50000 \
--direction inbound
# Trustd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n trustd \
--priority 1002 \
--destination-port-ranges 50001 \
--direction inbound
# etcd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n etcd \
--priority 1003 \
--destination-port-ranges 2379-2380 \
--direction inbound
# Kubernetes API Server
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n kube \
--priority 1004 \
--destination-port-ranges 6443 \
--direction inbound
Load Balancer
We will create a public IP, load balancer, and a health check that we will use for our control plane.
# Create public ip
az network public-ip create \
--resource-group $GROUP \
--name talos-public-ip \
--allocation-method static
# Create lb
az network lb create \
--resource-group $GROUP \
--name talos-lb \
--public-ip-address talos-public-ip \
--frontend-ip-name talos-fe \
--backend-pool-name talos-be-pool
# Create health check
az network lb probe create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-lb-health \
--protocol tcp \
--port 6443
# Create lb rule for 6443
az network lb rule create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-6443 \
--protocol tcp \
--frontend-ip-name talos-fe \
--frontend-port 6443 \
--backend-pool-name talos-be-pool \
--backend-port 6443 \
--probe-name talos-lb-health
Network Interfaces
In Azure, we have to pre-create the NICs for our control plane so that they can be associated with our load balancer.
for i in $( seq 0 1 2 ); do
# Create public IP for each nic
az network public-ip create \
--resource-group $GROUP \
--name talos-controlplane-public-ip-$i \
--allocation-method static
# Create nic
az network nic create \
--resource-group $GROUP \
--name talos-controlplane-nic-$i \
--vnet-name talos-vnet \
--subnet talos-subnet \
--network-security-group talos-sg \
--public-ip-address talos-controlplane-public-ip-$i \
--lb-name talos-lb \
--lb-address-pools talos-be-pool
done
Cluster Configuration
With our networking bits set up, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-public-ip \
--query [ipAddress] \
--output tsv)
talosctl gen config talos-k8s-azure-tutorial https://${LB_PUBLIC_IP}:6443
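As in the AWS and DigitalOcean sections, the generated configuration files can be validated before use:
talosctl validate --config init.yaml --mode cloud
talosctl validate --config controlplane.yaml --mode cloud
talosctl validate --config join.yaml --mode cloud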
Compute Creation
We are now ready to create our Azure nodes.
# Create availability set
az vm availability-set create \
--name talos-controlplane-av-set \
-g $GROUP
# Create controlplane 0
az vm create \
--name talos-controlplane-0 \
--image talos \
--custom-data ./init.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--os-disk-size-gb 20 \
--nics talos-controlplane-nic-0 \
--availability-set talos-controlplane-av-set \
--no-wait
# Create 2 more controlplane nodes
for i in $( seq 1 2 ); do
az vm create \
--name talos-controlplane-$i \
--image talos \
--custom-data ./controlplane.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--os-disk-size-gb 20 \
--nics talos-controlplane-nic-$i \
--availability-set talos-controlplane-av-set \
--no-wait
done
# Create worker node
az vm create \
--name talos-worker-0 \
--image talos \
--vnet-name talos-vnet \
--subnet talos-subnet \
--custom-data ./join.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--nsg talos-sg \
--os-disk-size-gb 20 \
--no-wait
# NOTES:
# `--admin-username` and `--generate-ssh-keys` are required by the az cli,
# but are not actually used by talos
# `--os-disk-size-gb` is the backing disk for Kubernetes and any workload containers
# `--boot-diagnostics-storage` is to enable console output which may be necessary
# for troubleshooting
Retrieve the kubeconfig
You should now be able to interact with your cluster with talosctl.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-controlplane-public-ip-0 \
--query [ipAddress] \
--output tsv)
talosctl --talosconfig ./talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig config node $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig kubeconfig .
kubectl --kubeconfig ./kubeconfig get nodes
3 - DigitalOcean
Creating a Cluster via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume an existing Space, and some familiarity with DigitalOcean. If you need more information on DigitalOcean specifics, please see the official DigitalOcean documentation.
Create the Image
First, download the DigitalOcean image from a Talos release.
Extract the archive to get the disk.raw file, then compress it using gzip to disk.raw.gz.
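For example, assuming the downloaded archive is named digital-ocean-amd64.tar.gz (the exact asset name may differ between releases):
tar -xf digital-ocean-amd64.tar.gz # extracts disk.raw
gzip disk.raw # produces disk.raw.gz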
Using an upload method of your choice (doctl does not have Spaces support), upload the image to a Space.
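Since Spaces is S3-compatible, one option is the AWS CLI pointed at the Spaces endpoint (a sketch; assumes a Space named talos-tutorial, matching the URL below, with Spaces keys configured as AWS credentials):
aws s3 cp disk.raw.gz s3://talos-tutorial/disk.raw.gz \
--endpoint-url https://$REGION.digitaloceanspaces.com \
--acl public-read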
Now, create an image using the URL of the uploaded image:
doctl compute image create \
--region $REGION \
--image-description talos-digital-ocean-tutorial \
--image-url https://talos-tutorial.$REGION.digitaloceanspaces.com/disk.raw.gz \
Talos
Save the image ID. We will need it when creating droplets.
Create a Load Balancer
doctl compute load-balancer create \
--region $REGION \
--name talos-digital-ocean-tutorial-lb \
--tag-name talos-digital-ocean-tutorial-control-plane \
--health-check protocol:tcp,port:6443,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:5,unhealthy_threshold:3 \
--forwarding-rules entry_protocol:tcp,entry_port:443,target_protocol:tcp,target_port:6443
We will need the IP of the load balancer. Using the ID of the load balancer, run:
doctl compute load-balancer get --format IP <load balancer ID>
Save it, as we will need it in the next step.
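To capture it in a variable (a sketch; the load balancer ID comes from the create output):
LB_IP=$(doctl compute load-balancer get <load balancer ID> --format IP --no-header)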
Create the Machine Configuration Files
Generating Base Configurations
Using the IP of the load balancer created earlier, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-digital-ocean-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Validate the Configuration Files
$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode
Create the Droplets
Create the Bootstrap Node
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file init.yaml \
--ssh-keys <ssh key fingerprint> \
talos-control-plane-1
Note: Although SSH is not used by Talos, DigitalOcean still requires that an SSH key be associated with the droplet. Create a dummy key that can be used to satisfy this requirement.
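A throwaway key is enough (a sketch; the key itself is never used by Talos):
ssh-keygen -t ed25519 -f do-dummy-key -N ''
doctl compute ssh-key import do-dummy-key --public-key-file do-dummy-key.pub
doctl compute ssh-key list --format ID,FingerPrint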
Create the Remaining Control Plane Nodes
Run the following commands to create two more control plane nodes, giving us three in total:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key fingerprint> \
talos-control-plane-2
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key fingerprint> \
talos-control-plane-3
Create the Worker Nodes
Run the following to create a worker node:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--user-data-file join.yaml \
--ssh-keys <ssh key fingerprint> \
talos-worker-1
Retrieve the kubeconfig
To configure talosctl we will need the first control plane node’s IP:
doctl compute droplet get --format PublicIPv4 <droplet ID>
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .
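As with the other platforms, a quick check that the nodes registered:
kubectl --kubeconfig ./kubeconfig get nodes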
4 - GCP
Creating a Cluster via the CLI
In this guide, we will create an HA Kubernetes cluster in GCP with 1 worker node. We will assume an existing Cloud Storage bucket, and some familiarity with Google Cloud. If you need more information on Google Cloud specifics, please see the official Google documentation.
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your own values.
# Storage bucket to use
export STORAGE_BUCKET="StorageBucketName"
# Region
export REGION="us-central1"
Create the Image
First, download the Google Cloud image from a Talos release.
These images are called gcp-$ARCH.tar.gz.
Upload the Image
Once you have downloaded the image, you can upload it to your storage bucket with:
gsutil cp /path/to/gcp-amd64.tar.gz gs://$STORAGE_BUCKET
Register the Image
Now that the image is present in our bucket, we’ll register it.
gcloud compute images create talos \
--source-uri=gs://$STORAGE_BUCKET/gcp-amd64.tar.gz \
--guest-os-features=VIRTIO_SCSI_MULTIQUEUE
Network Infrastructure
Load Balancers and Firewalls
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a firewall, load balancer, and their required components.
# Create Instance Group
gcloud compute instance-groups unmanaged create talos-ig \
--zone $REGION-b
# Create port for IG
gcloud compute instance-groups set-named-ports talos-ig \
--named-ports tcp6443:6443 \
--zone $REGION-b
# Create health check
gcloud compute health-checks create tcp talos-health-check --port 6443
# Create backend
gcloud compute backend-services create talos-be \
--global \
--protocol TCP \
--health-checks talos-health-check \
--timeout 5m \
--port-name tcp6443
# Add instance group to backend
gcloud compute backend-services add-backend talos-be \
--global \
--instance-group talos-ig \
--instance-group-zone $REGION-b
# Create tcp proxy
gcloud compute target-tcp-proxies create talos-tcp-proxy \
--backend-service talos-be \
--proxy-header NONE
# Create LB IP
gcloud compute addresses create talos-lb-ip --global
# Forward 443 from LB IP to tcp proxy
gcloud compute forwarding-rules create talos-fwd-rule \
--global \
--ports 443 \
--address talos-lb-ip \
--target-tcp-proxy talos-tcp-proxy
# Create firewall rule for health checks
gcloud compute firewall-rules create talos-controlplane-firewall \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags talos-controlplane \
--allow tcp:6443
# Create firewall rule to allow talosctl access
gcloud compute firewall-rules create talos-controlplane-talosctl \
--source-ranges 0.0.0.0/0 \
--target-tags talos-controlplane \
--allow tcp:50000
Cluster Configuration
With our networking bits set up, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(gcloud compute forwarding-rules describe talos-fwd-rule \
--global \
--format json \
| jq -r .IPAddress)
talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443
Compute Creation
We are now ready to create our GCP nodes.
# Create control plane 0
gcloud compute instances create talos-controlplane-0 \
--image talos \
--zone $REGION-b \
--tags talos-controlplane \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./init.yaml
# Create control plane 1/2
for i in $( seq 1 2 ); do
gcloud compute instances create talos-controlplane-$i \
--image talos \
--zone $REGION-b \
--tags talos-controlplane \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./controlplane.yaml
done
# Add control plane nodes to instance group
for i in $( seq 0 1 2 ); do
gcloud compute instance-groups unmanaged add-instances talos-ig \
--zone $REGION-b \
--instances talos-controlplane-$i
done
# Create worker
gcloud compute instances create talos-worker-0 \
--image talos \
--zone $REGION-b \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./join.yaml
Retrieve the kubeconfig
You should now be able to interact with your cluster with talosctl.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(gcloud compute instances describe talos-controlplane-0 \
--zone $REGION-b \
--format json \
| jq -r '.networkInterfaces[0].accessConfigs[0].natIP')
talosctl --talosconfig ./talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig config node $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig kubeconfig .
kubectl --kubeconfig ./kubeconfig get nodes