Talos Linux Guides
- 1: Installation
- 1.1: Bare Metal Platforms
- 1.1.1: Digital Rebar
- 1.1.2: Equinix Metal
- 1.1.3: ISO
- 1.1.4: Matchbox
- 1.1.5: Network Configuration
- 1.1.6: PXE
- 1.1.7: Sidero Metal
- 1.2: Virtualized Platforms
- 1.3: Cloud Platforms
- 1.3.1: AWS
- 1.3.2: Azure
- 1.3.3: DigitalOcean
- 1.3.4: Exoscale
- 1.3.5: GCP
- 1.3.6: Hetzner
- 1.3.7: Nocloud
- 1.3.8: OpenStack
- 1.3.9: Oracle
- 1.3.10: Scaleway
- 1.3.11: UpCloud
- 1.3.12: Vultr
- 1.4: Local Platforms
- 1.4.1: Docker
- 1.4.2: QEMU
- 1.4.3: VirtualBox
- 1.5: Single Board Computers
- 1.5.1: Banana Pi M64
- 1.5.2: Friendlyelec Nano PI R4S
- 1.5.3: Jetson Nano
- 1.5.4: Libre Computer Board ALL-H3-CC
- 1.5.5: Pine64
- 1.5.6: Pine64 Rock64
- 1.5.7: Radxa ROCK PI 4
- 1.5.8: Radxa ROCK PI 4C
- 1.5.9: Raspberry Pi 4 Model B
- 1.5.10: Raspberry Pi Series
- 2: Configuration
- 2.1: Configuration Patches
- 2.2: Containerd
- 2.3: Custom Certificate Authorities
- 2.4: Disk Encryption
- 2.5: Editing Machine Configuration
- 2.6: Logging
- 2.7: Managing Talos PKI
- 2.8: NVIDIA Fabric Manager
- 2.9: NVIDIA GPU (OSS drivers)
- 2.10: NVIDIA GPU (Proprietary drivers)
- 2.11: Pull Through Image Cache
- 2.12: Role-based access control (RBAC)
- 2.13: System Extensions
- 3: How Tos
- 3.1: How to enable workers on your control plane nodes
- 3.2: How to scale down a Talos cluster
- 3.3: How to scale up a Talos cluster
- 4: Network
- 4.1: Corporate Proxies
- 4.2: Network Device Selector
- 4.3: Virtual (shared) IP
- 4.4: Wireguard Network
- 5: Discovery Service
- 6: Interactive Dashboard
- 7: Resetting a Machine
- 8: Upgrading Talos Linux
1 - Installation
1.1 - Bare Metal Platforms
1.1.1 - Digital Rebar
Prerequisites
- 3 nodes (please see hardware requirements)
- Loadbalancer
- Digital Rebar Server
- Talosctl access (see talosctl setup)
Creating a Cluster
In this guide we will create a Kubernetes cluster with 1 worker node and 2 control plane nodes. We assume an existing Digital Rebar deployment and some familiarity with iPXE.
We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
The load balancer is used to distribute the load across multiple control plane nodes. This isn’t covered in detail, because we assume some load balancing knowledge beforehand. If you think this should be added to the docs, please create an issue.
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch with an RFC 6902 JSON patch which will be applied during config generation.
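For example, a sketch of a JSON 6902 patch that overrides the install disk at generation time (the disk path here is an assumption; adjust it for your hardware):
talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port> \
  --config-patch '[{"op": "replace", "path": "/machine/install/disk", "value": "/dev/nvme0n1"}]'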
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config worker.yaml --mode metal
worker.yaml is valid for metal mode
Publishing the Machine Configuration Files
Digital Rebar has a built-in file server, which we can use to expose the Talos configuration files.
We will place controlplane.yaml and worker.yaml into the Digital Rebar file server by using the drpcli tools.
Copy the generated files from the step above into your Digital Rebar installation.
drpcli file upload <file>.yaml as <file>.yaml
Replacing <file> with controlplane or worker.
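For example, assuming both generated files are in the current directory:
drpcli file upload controlplane.yaml as controlplane.yaml
drpcli file upload worker.yaml as worker.yaml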
Download the boot files
Download a recent version of boot.tar.gz from GitHub.
Upload to DRP:
$ drpcli isos upload boot.tar.gz as talos.tar.gz
{
"Path": "talos.tar.gz",
"Size": 96470072
}
We have some Digital Rebar example files in the Git repo you can use to provision Digital Rebar with drpcli.
To apply these configs you need to create them, and then apply them as follows:
$ drpcli bootenvs create talos
{
"Available": true,
"BootParams": "",
"Bundle": "",
"Description": "",
"Documentation": "",
"Endpoint": "",
"Errors": [],
"Initrds": [],
"Kernel": "",
"Meta": {},
"Name": "talos",
"OS": {
"Codename": "",
"Family": "",
"IsoFile": "",
"IsoSha256": "",
"IsoUrl": "",
"Name": "",
"SupportedArchitectures": {},
"Version": ""
},
"OnlyUnknown": false,
"OptionalParams": [],
"ReadOnly": false,
"RequiredParams": [],
"Templates": [],
"Validated": true
}
drpcli bootenvs update talos - < bootenv.yaml
You need to do this for all files in the example directory. If you don’t have access to the drpcli tools, you can also use the web interface.
It’s important to have a corresponding SHA256 hash matching the boot.tar.gz archive.
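You can compute the hash locally with standard tools, for example:
sha256sum boot.tar.gz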
Bootenv BootParams
We’re using some of Digital Rebar’s built-in templating to make sure the machine gets the correct role assigned.
talos.platform=metal talos.config={{ .ProvisionerURL }}/files/{{.Param \"talos/role\"}}.yaml"
This is why we also include a params.yaml
in the example directory to make sure the role is set to one of the following:
- controlplane
- worker
The {{.Param \"talos/role\"}}
then gets populated with one of the above roles.
Boot the Machines
In the UI of Digital Rebar you need to select the machines you want to provision. Once selected, you need to assign the following:
- Profile
- Workflow
This will provision the Stage and Bootenv with the Talos values. Once this is done, you can boot the machine.
Bootstrap Etcd
To configure talosctl, we will need the first control plane node’s IP.
Set the endpoints and nodes:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
1.1.2 - Equinix Metal
You can create a Talos Linux cluster on Equinix Metal in a variety of ways, such as through the EM web UI, the metal command line tool, or through PXE booting.
Talos Linux is a supported OS install option on Equinix Metal, so it’s an easy process.
Regardless of the method, the process is:
- Create a DNS entry for your Kubernetes endpoint.
- Generate the configurations using talosctl.
- Provision your machines on Equinix Metal.
- Push the configurations to your servers (if not done as part of the machine provisioning).
- Configure your Kubernetes endpoint to point to the newly created control plane nodes.
- Bootstrap the cluster.
Define the Kubernetes Endpoint
There are a variety of ways to create an HA endpoint for the Kubernetes cluster. Some of the ways are:
- DNS
- Load Balancer
- BGP
Whatever way is chosen, it should result in an IP address/DNS name that routes traffic to all the control plane nodes. We do not know the control plane node IP addresses at this stage, but we should define the endpoint DNS entry so that we can use it in creating the cluster configuration. After the nodes are provisioned, we can use their addresses to create the endpoint A records, or bind them to the load balancer, etc.
Create the Machine Configuration Files
Generating Configurations
Using the DNS name of the loadbalancer defined above, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-em-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
The port used above should be 6443, unless your load balancer maps a different port to port 6443 on the control plane nodes.
Validate the Configuration Files
talosctl validate --config controlplane.yaml --mode metal
talosctl validate --config worker.yaml --mode metal
Note: Validation of the install disk could potentially fail as validation is performed on your local machine and the specified disk may not exist.
Passing in the configuration as User Data
You can use the metadata service provided by Equinix Metal to pass in the machine configuration. It is required to add a shebang to the top of the configuration file; the convention we use is #!talos.
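One way to prepend the shebang, as a sketch assuming GNU sed:
sed -i '1i #!talos' controlplane.yaml
head -n 1 controlplane.yaml # should print #!talos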
Provision the machines in Equinix Metal
Using the Equinix Metal UI
Simply select the location and type of machines in the Equinix Metal web interface.
Select Talos as the Operating System, then select the number of servers to create, and name them (in lowercase only).
Under optional settings, you can paste in the contents of the controlplane.yaml that was generated above (ensuring you add a first line of #!talos).
You can repeat this process to create machines of different types for control plane and worker nodes (although you would pass in worker.yaml
for the worker nodes, as user data).
If you did not pass in the machine configuration as User Data, you need to provide it to each machine, with the following command:
talosctl apply-config --insecure --nodes <Node IP> --file ./controlplane.yaml
Creating a Cluster via the Equinix Metal CLI
This guide assumes the user has a working API token, and the Equinix Metal CLI installed.
Because Talos Linux is a supported operating system, Talos Linux machines can be provisioned directly via the CLI, using the -O talos_v1
parameter (for Operating System).
Note: Ensure you have prepended #!talos to the controlplane.yaml file.
metal device create \
--project-id $PROJECT_ID \
--facility $FACILITY \
--operating-system "talos_v1" \
--plan $PLAN \
--hostname $HOSTNAME \
--userdata-file controlplane.yaml
e.g. metal device create -p <projectID> -f da11 -O talos_v1 -P c3.small.x86 -H steve.test.11 --userdata-file ./controlplane.yaml
Repeat this to create each control plane node desired: there should usually be 3 for an HA cluster.
Network Booting via iPXE
You may install Talos over the network using TFTP and iPXE. You would first need a working TFTP and iPXE server.
In general this requires a Talos kernel vmlinuz and initramfs. These assets can be downloaded from a given release.
PXE Boot Kernel Parameters
The following is a list of kernel parameters required by Talos:
- talos.platform: set this to equinixMetal
- init_on_alloc=1: required by KSPP
- slab_nomerge: required by KSPP
- pti=on: required by KSPP
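For reference, a minimal iPXE script sketch that passes these parameters (the release asset URLs and console argument are assumptions; adjust them for your release and hardware):
#!ipxe
kernel https://github.com/siderolabs/talos/releases/download/v1.4.8/vmlinuz-amd64 talos.platform=equinixMetal init_on_alloc=1 slab_nomerge pti=on console=ttyS1,115200n8 printk.devkmsg=on
initrd https://github.com/siderolabs/talos/releases/download/v1.4.8/initramfs-amd64.xz
boot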
Create the Control Plane Nodes
metal device create \
--project-id $PROJECT_ID \
--facility $FACILITY \
--ipxe-script-url $PXE_SERVER \
--operating-system "custom_ipxe" \
--plan $PLAN \
--hostname $HOSTNAME \
--userdata-file controlplane.yaml
Note: Repeat this to create each control plane node desired: there should usually be 3 for an HA cluster.
Create the Worker Nodes
metal device create \
--project-id $PROJECT_ID \
--facility $FACILITY \
--ipxe-script-url $PXE_SERVER \
--operating-system "custom_ipxe" \
--plan $PLAN \
--hostname $HOSTNAME \
--userdata-file worker.yaml
Update the Kubernetes endpoint
Now that our control plane nodes have been created and we know their IP addresses, we can associate them with the Kubernetes endpoint.
Configure your load balancer to route traffic to these nodes, or add A
records to your DNS entry for the endpoint, for each control plane node.
e.g.
host endpoint.mydomain.com
endpoint.mydomain.com has address 145.40.90.201
endpoint.mydomain.com has address 147.75.109.71
endpoint.mydomain.com has address 145.40.90.177
Bootstrap Etcd
Set the endpoints and nodes for talosctl:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
This only needs to be issued to one control plane node.
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
1.1.3 - ISO
Talos can be installed on a bare-metal machine using an ISO image.
ISO images for amd64
and arm64
architectures are available on the Talos releases page.
Talos doesn’t install itself to disk when booted from an ISO until the machine configuration is applied.
Please follow the getting started guide for the generic steps on how to install Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from a Talos ISO. The boot order should prefer disk over ISO, or the ISO should be removed after the installation to make Talos boot from disk.
See kernel parameters reference for the list of kernel parameters supported by Talos.
1.1.4 - Matchbox
Creating a Cluster
In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing load balancer, matchbox deployment, and some familiarity with iPXE.
We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatch which will be applied during the config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config worker.yaml --mode metal
worker.yaml is valid for metal mode
Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
A special kernel parameter (talos.config
) must be used to inform Talos about where it should retrieve its configuration file.
To keep things simple we will place controlplane.yaml
, and worker.yaml
into Matchbox’s assets
directory.
This directory is automatically served by Matchbox.
Create the Matchbox Configuration Files
The profiles we will create will reference vmlinuz
, and initramfs.xz
.
Download these files from the release of your choice, and place them in /var/lib/matchbox/assets
.
Profiles
Control Plane Nodes
{
"id": "control-plane",
"name": "control-plane",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"console=ttyS0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/controlplane.yaml"
]
}
}
Note: Be sure to change
http://matchbox.talos.dev
to the endpoint of your matchbox server.
Worker Nodes
{
"id": "default",
"name": "default",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"console=ttyS0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/worker.yaml"
]
}
}
Groups
Now, create the following groups, and ensure that the selectors are accurate for your specific setup.
{
"id": "control-plane-1",
"name": "control-plane-1",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-2",
"name": "control-plane-2",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-3",
"name": "control-plane-3",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "default",
"name": "default",
"profile": "default"
}
Boot the Machines
Now that we have our configuration files in place, boot all the machines. Talos will come up on each machine, grab its configuration file, and bootstrap itself.
Bootstrap Etcd
Set the endpoints and nodes:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
1.1.5 - Network Configuration
By default, Talos will run a DHCP client on all interfaces which have a link, and that might be enough for most cases. If more advanced network configuration is required, it can be done via the machine configuration file.
But sometimes it is required to apply network configuration even before the machine configuration can be fetched from the network.
Kernel Command Line
Talos supports some kernel command line parameters to configure network before the machine configuration is fetched.
Note: Kernel command line parameters are not persisted after Talos installation, so proper network configuration should be done via the machine configuration.
Address, default gateway and DNS servers can be configured via ip=
kernel command line parameter:
ip=172.20.0.2::172.20.0.1:255.255.255.0::eth0.100:::::
Bonding can be configured via bond=
kernel command line parameter:
bond=bond0:eth0,eth1:balance-rr
VLANs can be configured via vlan=
kernel command line parameter:
vlan=eth0.100:eth0
See kernel parameters reference for more details.
Platform Network Configuration
Some platforms (e.g. AWS, Google Cloud, etc.) have their own network configuration mechanisms, which can be used to perform the initial network configuration.
There is no such mechanism for bare-metal platforms, so Talos provides a way to use platform network config on the metal
platform to submit the initial network configuration.
The platform network configuration is a YAML document which contains resource specifications for various network resources.
For the metal
platform, the interactive dashboard can be used to edit the platform network configuration.
The current value of the platform network configuration can be retrieved using the MetaKeys
resource (key 0xa
):
talosctl get meta 0xa
The platform network configuration can be updated using the talosctl meta
command for the running node:
talosctl meta write 0xa '{"externalIPs": ["1.2.3.4"]}'
talosctl meta delete 0xa
The initial platform network configuration for the metal
platform can be also included into the generated Talos image:
docker run --rm -i ghcr.io/siderolabs/imager:v1.4.8 iso --arch amd64 --tar-to-stdout --meta 0xa='{...}' | tar xz
docker run --rm -i --privileged ghcr.io/siderolabs/imager:v1.4.8 image --platform metal --arch amd64 --tar-to-stdout --meta 0xa='{...}' | tar xz
The platform network configuration gets merged with other sources of network configuration, the details can be found in the network resources guide.
1.1.6 - PXE
Talos can be installed on bare metal using a PXE service. There are two more detailed guides for PXE booting using Matchbox and Digital Rebar.
This guide describes generic steps for PXE booting Talos on bare-metal.
First, download the vmlinuz
and initramfs
assets from the Talos releases page.
Set up the machines to PXE boot from the network (usually by setting the boot order in the BIOS).
There might be options specific to the hardware being used, booting in BIOS or UEFI mode, using iPXE, etc.
Talos requires the following kernel parameters to be set on the initial boot:
talos.platform=metal
slab_nomerge
pti=on
When booted from the network without machine configuration, Talos will start in maintenance mode.
Please follow the getting started guide for the generic steps on how to install Talos.
See kernel parameters reference for the list of kernel parameters supported by Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from network. The boot order should prefer disk over network.
Talos can automatically fetch the machine configuration from the network on the initial boot using talos.config
kernel parameter.
A metadata service (HTTP service) can be implemented to deliver customized configuration to each node, for example by using the MAC address of the node:
talos.config=https://metadata.service/talos/config?mac=${mac}
Note: The
talos.config
kernel parameter supports other substitution variables, see kernel parameters reference for the full list.
1.1.7 - Sidero Metal
Sidero Metal is a project created by the Talos team that provides a bare metal installer for Cluster API, and that has native support for Talos Linux. It can be easily installed using clusterctl. The best way to get started with Sidero Metal is to visit the website.
1.2 - Virtualized Platforms
1.2.1 - Hyper-V
Prerequisites
- Download the latest talos-amd64.iso ISO from the GitHub releases page
- Create a New-TalosVM folder in any of your PS Module Path folders ($env:PSModulePath -split ';') and save the New-TalosVM.psm1 there
Plan Overview
Here we will create a basic 3-node cluster with a single control plane node and two worker nodes. The only difference between the control plane and worker nodes is the amount of RAM and an additional storage VHD. This is a personal preference and can be configured to your liking.
We are using a VMNamePrefix argument for the VM name prefix, not the full hostname.
This command will find any existing VM with that prefix and “+1” the highest suffix it finds.
For example, if VMs talos-cp01 and talos-cp02 exist, this will create VMs starting from talos-cp03, depending on the NumberOfVMs argument.
Setup a Control Plane Node
Use the following command to create a single control plane node:
New-TalosVM -VMNamePrefix talos-cp -CPUCount 2 -StartupMemory 4GB -SwitchName LAB -TalosISOPath C:\ISO\talos-amd64.iso -NumberOfVMs 1 -VMDestinationBasePath 'D:\Virtual Machines\Test VMs\Talos'
This will create talos-cp01
VM and power it on.
Setup Worker Nodes
Use the following command to create 2 worker nodes:
New-TalosVM -VMNamePrefix talos-worker -CPUCount 4 -StartupMemory 8GB -SwitchName LAB -TalosISOPath C:\ISO\talos-amd64.iso -NumberOfVMs 2 -VMDestinationBasePath 'D:\Virtual Machines\Test VMs\Talos' -StorageVHDSize 50GB
This will create two VMs, talos-worker01 and talos-worker02, and attach an additional 50GB VHD for storage (which in my case will be passed to Mayastor).
Pushing Config to the Nodes
Now that our VMs are ready, find their IP addresses from the console of each VM. With that information, push the config to the control plane node with:
# set control plane IP variable
$CONTROL_PLANE_IP='10.10.10.x'
# Generate talos config
talosctl gen config talos-cluster https://$($CONTROL_PLANE_IP):6443 --output-dir .
# Apply config to control plane node
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file .\controlplane.yaml
Pushing Config to Worker Nodes
Similarly, for the workers:
talosctl apply-config --insecure --nodes 10.10.10.x --file .\worker.yaml
Apply the config to both nodes.
Bootstrap Cluster
Now that our nodes are ready, we are ready to bootstrap the Kubernetes cluster.
# Use the following command to set the node and endpoint permanently in the config so you don't have to type them every time
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
# Bootstrap cluster
talosctl bootstrap
# Generate kubeconfig
talosctl kubeconfig .
This will generate the kubeconfig file, which you can use to connect to the cluster.
1.2.2 - KVM
Talos is known to work on KVM.
We don’t yet have a documented guide specific to KVM; however, you can have a look at our Vagrant & Libvirt guide which uses KVM for virtualization.
If you run into any issues, our community can probably help!
1.2.3 - Proxmox
In this guide we will create a Kubernetes cluster using Proxmox.
Video Walkthrough
To see a live demo of this writeup, watch the video walkthrough on YouTube.
Installation
How to Get Proxmox
It is assumed that you have already installed Proxmox onto the server you wish to create Talos VMs on. Visit the Proxmox downloads page if necessary.
Install talosctl
You can download talosctl
via
curl -sL https://talos.dev/install | sh
Download ISO Image
In order to install Talos in Proxmox, you will need the ISO image from the Talos release page.
You can download talos-amd64.iso
via
github.com/siderolabs/talos/releases
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso
For example version v1.4.8
for linux
platform:
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/v1.4.8/talos-amd64.iso -L -o _out/talos-amd64.iso
Upload ISO
From the Proxmox UI, select the “local” storage and enter the “Content” section. Click the “Upload” button:
Select the ISO you downloaded previously, then hit “Upload”
Create VMs
Before starting, familiarise yourself with the system requirements for Talos and assign VM resources accordingly.
Create a new VM by clicking the “Create VM” button in the Proxmox UI:
Fill out a name for the new VM:
In the OS tab, select the ISO we uploaded earlier:
Keep the defaults set in the “System” tab.
Keep the defaults in the “Hard Disk” tab as well, only changing the size if desired.
In the “CPU” section, give at least 2 cores to the VM:
Note: As of Talos v1.0 (which requires the x86-64-v2 microarchitecture), booting with the default Processor Type kvm64 will not work prior to Proxmox V8.0. You can enable the required CPU features after creating the VM by adding the following line to the corresponding /etc/pve/qemu-server/<vmid>.conf file:
args: -cpu kvm64,+cx16,+lahf_lm,+popcnt,+sse3,+ssse3,+sse4.1,+sse4.2
Alternatively, you can set the Processor Type to host if your Proxmox host supports these CPU features; this, however, prevents using live VM migration.
Verify that the RAM is set to at least 2GB:
Keep the default values for networking, verifying that the VM is set to come up on the bridge interface:
Finish creating the VM by clicking through the “Confirm” tab and then “Finish”.
Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.
Note: Talos doesn’t support memory hot plugging; if creating the VM programmatically, don’t enable memory hotplug on your Talos VMs. Doing so will cause Talos to be unable to see all available memory and to have insufficient memory to complete installation of the cluster.
Start Control Plane Node
Once the VMs have been created and updated, start the VM that will be the first control plane node. This VM will boot the ISO image specified earlier and enter “maintenance mode”.
With DHCP server
Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received.
Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP
for the rest of this guide.
If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4
.
Without DHCP server
To apply the machine configuration in maintenance mode, the VM has to have an IP address on the network, so you can set it manually at boot time.
Press e at the boot menu and set the IP parameters for the VM.
Format is:
ip=<client-ip>:<srv-ip>:<gw-ip>:<netmask>:<host>:<device>:<autoconf>
For example, if $CONTROL_PLANE_IP is 192.168.0.100 and the gateway is 192.168.0.1:
linux /boot/vmlinuz init_on_alloc=1 slab_nomerge pti=on panic=0 consoleblank=0 printk.devkmsg=on earlyprintk=ttyS0 console=tty0 console=ttyS0 talos.platform=metal ip=192.168.0.100::192.168.0.1:255.255.255.0::eth0:off
Then press Ctrl-x or F10
Generate Machine Configurations
With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
talosctl gen config talos-vbox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out
directory: controlplane.yaml
, worker.yaml
, and talosconfig
.
Note: The Talos config by default will install to /dev/sda. Depending on your setup, the virtual disk may show up differently, e.g. /dev/vda. You can check for disks by running the following command:
talosctl disks --insecure --nodes $CONTROL_PLANE_IP
Update the controlplane.yaml and worker.yaml config files to point to the correct disk location.
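A sketch of the relevant snippet to change in both files, assuming the disk shows up as /dev/vda:
machine:
  install:
    disk: /dev/vda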
Create Control Plane Node
Using the controlplane.yaml
generated above, you can now apply this config using talosctl.
Issue:
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml
You should now see some action in the Proxmox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.
Note: This process can be repeated multiple times to create an HA control plane.
Create Worker Node
Create at least a single worker node using a process similar to the control plane creation above.
Start the worker node VM and wait for it to enter “maintenance mode”.
Take note of the worker node’s IP address, which will be referred to as $WORKER_IP
Issue:
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml
Note: This process can be repeated multiple times to add additional workers.
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl containers
for a list of containers in the system
namespace, or talosctl containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl logs <container>
or talosctl logs -k <container>
.
First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
Bootstrap Etcd
talosctl bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl kubeconfig .
Cleaning Up
To cleanup, simply stop and delete the virtual machines from the Proxmox UI.
1.2.4 - Vagrant & Libvirt
Prerequisites
- Linux OS
- Vagrant installed
- vagrant-libvirt plugin installed
- talosctl installed
- kubectl installed
Overview
We will use Vagrant and its libvirt plugin to create a KVM-based cluster with 3 control plane nodes and 1 worker node.
For this, we will mount the Talos ISO into the VMs using a virtual CD-ROM, and configure the VMs to attempt to boot from disk first with a fallback to the CD-ROM.
We will also configure a virtual IP address on Talos to achieve high-availability on kube-apiserver.
Preparing the environment
First, we download the latest talos-amd64.iso
ISO from GitHub releases into the /tmp
directory.
wget --timestamping https://github.com/siderolabs/talos/releases/download/v1.4.8/talos-amd64.iso -O /tmp/talos-amd64.iso
Create a Vagrantfile
with the following contents:
Vagrant.configure("2") do |config|
config.vm.define "control-plane-node-1" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-1.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/talos-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "control-plane-node-2" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-2.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/talos-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "control-plane-node-3" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-3.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/talos-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "worker-node-1" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 1
domain.memory = 1024
domain.serial :type => "file", :source => {:path => "/tmp/worker-node-1.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/talos-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
end
Bring up the nodes
Check the status of vagrant VMs:
vagrant status
You should see the VMs in “not created” state:
Current machine states:
control-plane-node-1 not created (libvirt)
control-plane-node-2 not created (libvirt)
control-plane-node-3 not created (libvirt)
worker-node-1 not created (libvirt)
Bring up the vagrant environment:
vagrant up --provider=libvirt
Check the status again:
vagrant status
Now you should see the VMs in “running” state:
Current machine states:
control-plane-node-1 running (libvirt)
control-plane-node-2 running (libvirt)
control-plane-node-3 running (libvirt)
worker-node-1 running (libvirt)
Find out the IP addresses assigned by the libvirt DHCP by running:
virsh list | grep vagrant | awk '{print $2}' | xargs -t -L1 virsh domifaddr
Output will look like the following:
virsh domifaddr vagrant_control-plane-node-2
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet0 52:54:00:f9:10:e5 ipv4 192.168.121.119/24
virsh domifaddr vagrant_control-plane-node-1
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet1 52:54:00:0f:ae:59 ipv4 192.168.121.203/24
virsh domifaddr vagrant_worker-node-1
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet2 52:54:00:6f:28:95 ipv4 192.168.121.69/24
virsh domifaddr vagrant_control-plane-node-3
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet3 52:54:00:03:45:10 ipv4 192.168.121.125/24
Our control plane nodes have the IPs: 192.168.121.203
, 192.168.121.119
, 192.168.121.125
and the worker node has the IP 192.168.121.69
.
Now you should be able to interact with Talos nodes that are in maintenance mode:
talosctl -n 192.168.121.203 disks --insecure
Sample output:
DEV MODEL SERIAL TYPE UUID WWID MODALIAS NAME SIZE BUS_PATH
/dev/vda - - HDD - - virtio:d00000002v00001AF4 - 8.6 GB /pci0000:00/0000:00:03.0/virtio0/
Installing Talos
Pick an endpoint IP in the vagrant-libvirt
subnet but not used by any nodes, for example 192.168.121.100
.
Generate a machine configuration:
talosctl gen config my-cluster https://192.168.121.100:6443 --install-disk /dev/vda
Edit controlplane.yaml
to add the virtual IP you picked to a network interface under .machine.network.interfaces
, for example:
machine:
network:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: 192.168.121.100
Apply the configuration to the initial control plane node:
talosctl -n 192.168.121.203 apply-config --insecure --file controlplane.yaml
You can tail the logs of the node:
sudo tail -f /tmp/control-plane-node-1.log
Set up your shell to use the generated talosconfig and configure its endpoints (use the IPs of the control plane nodes):
export TALOSCONFIG=$(realpath ./talosconfig)
talosctl config endpoint 192.168.121.203 192.168.121.119 192.168.121.125
Bootstrap the Kubernetes cluster from the initial control plane node:
talosctl -n 192.168.121.203 bootstrap
Finally, apply the machine configurations to the remaining nodes:
talosctl -n 192.168.121.119 apply-config --insecure --file controlplane.yaml
talosctl -n 192.168.121.125 apply-config --insecure --file controlplane.yaml
talosctl -n 192.168.121.69 apply-config --insecure --file worker.yaml
After a while, you should see that all the members have joined:
talosctl -n 192.168.121.203 get members
The output will be like the following:
NODE NAMESPACE TYPE ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES
192.168.121.203 cluster Member talos-192-168-121-119 1 talos-192-168-121-119 controlplane Talos (v1.1.0) ["192.168.121.119"]
192.168.121.203 cluster Member talos-192-168-121-69 1 talos-192-168-121-69 worker Talos (v1.1.0) ["192.168.121.69"]
192.168.121.203 cluster Member talos-192-168-121-203 6 talos-192-168-121-203 controlplane Talos (v1.1.0) ["192.168.121.100","192.168.121.203"]
192.168.121.203 cluster Member talos-192-168-121-125 1 talos-192-168-121-125 controlplane Talos (v1.1.0) ["192.168.121.125"]
Interacting with Kubernetes cluster
Retrieve the kubeconfig from the cluster:
talosctl -n 192.168.121.203 kubeconfig ./kubeconfig
List the nodes in the cluster:
kubectl --kubeconfig ./kubeconfig get node -owide
You will see an output similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-192-168-121-203 Ready control-plane,master 3m10s v1.24.2 192.168.121.203 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-69 Ready <none> 2m25s v1.24.2 192.168.121.69 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-119 Ready control-plane,master 8m46s v1.24.2 192.168.121.119 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-125 Ready control-plane,master 3m11s v1.24.2 192.168.121.125 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
Congratulations, you have a highly-available Talos cluster running!
Cleanup
You can destroy the vagrant environment by running:
vagrant destroy -f
And remove the ISO image you downloaded:
sudo rm -f /tmp/talos-amd64.iso
1.2.5 - VMware
Creating a Cluster via the govc CLI
In this guide we will create an HA Kubernetes cluster with 2 worker nodes.
We will use the govc
cli which can be downloaded here.
Prereqs/Assumptions
This guide will use the virtual IP (“VIP”) functionality that is built into Talos in order to provide a stable, known IP for the Kubernetes control plane. This simply means the user should pick an IP on their “VM Network” to designate for this purpose and keep it handy for future steps.
Create the Machine Configuration Files
Generating Base Configurations
Using the VIP chosen in the prereq steps, we will now generate the base configuration files for the Talos machines.
This can be done with the talosctl gen config ...
command.
Take note that we will also use a JSON 6902 patch when creating the configs so that the control plane nodes get some special information about the VIP we chose earlier, as well as a DaemonSet to install VMware Tools on Talos nodes.
First, download cp.patch.yaml
to your local machine and edit the VIP to match your chosen IP.
You can do this by issuing: curl -fsSLO https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.4/talos-guides/install/virtualized-platforms/vmware/cp.patch.yaml
.
Its contents should look like the following:
- op: add
path: /machine/network
value:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: <VIP>
- op: replace
path: /cluster/extraManifests
value:
- "https://raw.githubusercontent.com/mologie/talos-vmtoolsd/master/deploy/unstable.yaml"
With the patch in hand, generate machine configs with:
$ talosctl gen config vmware-test https://<VIP>:<port> --config-patch-control-plane @cp.patch.yaml
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking if needed.
Optionally, you can specify additional patches by adding to the cp.patch.yaml
file downloaded earlier, or create your own patch files.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Set Environment Variables
govc makes use of the following environment variables:
export GOVC_URL=<vCenter url>
export GOVC_USERNAME=<vCenter username>
export GOVC_PASSWORD=<vCenter password>
Note: If your vCenter installation makes use of self-signed certificates, you’ll want to export GOVC_INSECURE=true.
There are some additional variables that you may need to set:
export GOVC_DATACENTER=<vCenter datacenter>
export GOVC_RESOURCE_POOL=<vCenter resource pool>
export GOVC_DATASTORE=<vCenter datastore>
export GOVC_NETWORK=<vCenter network>
Choose Install Approach
As part of this guide, we have a more automated install script that handles some of the complexity of importing OVAs and creating VMs. If you wish to use this script, we will detail that next. If you wish to carry out the manual approach, simply skip ahead to the “Manual Approach” section.
Scripted Install
Download the vmware.sh
script to your local machine.
You can do this by issuing curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.4/talos-guides/install/virtualized-platforms/vmware/vmware.sh"
.
This script has default variables for things like Talos version and cluster name that may be interesting to tweak before deploying.
Import OVA
To create a content library and import the Talos OVA corresponding to the mentioned Talos version, simply issue:
./vmware.sh upload_ova
Create Cluster
With the OVA uploaded to the content library, you can create a 5 node (by default) cluster with 3 control plane and 2 worker nodes:
./vmware.sh create
This step will create a VM from the OVA, edit the settings based on the env variables used for VM size/specs, then power on the VMs.
You may now skip past the “Manual Approach” section down to “Bootstrap Cluster”.
Manual Approach
Import the OVA into vCenter
A talos.ova
asset is published with each release.
We will refer to the version of the release as $TALOS_VERSION
below.
It can be easily exported with export TALOS_VERSION="v0.3.0-alpha.10"
or similar.
curl -LO https://github.com/siderolabs/talos/releases/download/$TALOS_VERSION/talos.ova
Create a content library (if needed) with:
govc library.create <library name>
Import the OVA to the library with:
govc library.import -n talos-${TALOS_VERSION} <library name> /path/to/downloaded/talos.ova
Create the Bootstrap Node
We’ll clone the OVA to create the bootstrap node (our first control plane node).
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-1
Talos makes use of the guestinfo
facility of VMware to provide the machine/cluster configuration.
This can be set using the govc vm.change
command.
To facilitate persistent storage using the vSphere cloud provider integration with Kubernetes, disk.enableUUID=1
is used.
govc vm.change \
-e "guestinfo.talos.config=$(cat controlplane.yaml | base64)" \
-e "disk.enableUUID=1" \
-vm control-plane-1
Update Hardware Resources for the Bootstrap Node
- -c is used to configure the number of cpus
- -m is used to configure the amount of memory (in MB)
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-1
The following can be used to adjust the EPHEMERAL disk size.
govc vm.disk.change -vm control-plane-1 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-1
Create the Remaining Control Plane Nodes
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-2
govc vm.change \
-e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
-e "disk.enableUUID=1" \
-vm control-plane-2
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-3
govc vm.change \
-e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
-e "disk.enableUUID=1" \
-vm control-plane-3
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-2
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-3
govc vm.disk.change -vm control-plane-2 -disk.name disk-1000-0 -size 10G
govc vm.disk.change -vm control-plane-3 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-2
govc vm.power -on control-plane-3
Update Settings for the Worker Nodes
govc library.deploy <library name>/talos-${TALOS_VERSION} worker-1
govc vm.change \
-e "guestinfo.talos.config=$(base64 worker.yaml)" \
-e "disk.enableUUID=1" \
-vm worker-1
govc library.deploy <library name>/talos-${TALOS_VERSION} worker-2
govc vm.change \
-e "guestinfo.talos.config=$(base64 worker.yaml)" \
-e "disk.enableUUID=1" \
-vm worker-2
govc vm.change \
-c 4 \
-m 8192 \
-vm worker-1
govc vm.change \
-c 4 \
-m 8192 \
-vm worker-2
govc vm.disk.change -vm worker-1 -disk.name disk-1000-0 -size 10G
govc vm.disk.change -vm worker-2 -disk.name disk-1000-0 -size 10G
govc vm.power -on worker-1
govc vm.power -on worker-2
Bootstrap Cluster
In the vSphere UI, open a console to one of the control plane nodes. You should see some output stating that etcd should be bootstrapped. This text should look like:
"etcd is waiting to join the cluster, if this node is the first node in the cluster, please run `talosctl bootstrap` against one of the following IPs:
Take note of the IP mentioned here and issue:
talosctl --talosconfig talosconfig bootstrap -e <control plane IP> -n <control plane IP>
Keep this IP handy for the following steps as well.
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig config endpoint <control plane IP>
talosctl --talosconfig talosconfig config node <control plane IP>
talosctl --talosconfig talosconfig kubeconfig .
Configure talos-vmtoolsd
The talos-vmtoolsd application was deployed as a daemonset as part of the cluster creation; however, we must now provide a talos credentials file for it to use.
Create a new talosconfig with:
talosctl --talosconfig talosconfig -n <control plane IP> config new vmtoolsd-secret.yaml --roles os:admin
Create a secret from the talosconfig:
kubectl -n kube-system create secret generic talos-vmtoolsd-config \
--from-file=talosconfig=./vmtoolsd-secret.yaml
Clean up the generated file from local system:
rm vmtoolsd-secret.yaml
Once configured, you should see these DaemonSet pods go into the “Running” state, and in vCenter you will now see IPs and info from the Talos nodes present in the UI.
1.2.6 - Xen
Talos is known to work on Xen. We don’t yet have a documented guide specific to Xen; however, you can follow the General Getting Started Guide. If you run into any issues, our community can probably help!
1.3 - Cloud Platforms
1.3.1 - AWS
Creating a Cluster via the AWS CLI
In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing VPC, and some familiarity with AWS. If you need more information on AWS specifics, please see the official AWS documentation.
Set the needed info
Change to your desired region:
REGION="us-west-2"
aws ec2 describe-vpcs --region $REGION
VPC="(the VpcId from the above command)"
Create the Subnet
Use a CIDR block that is present on the VPC specified above.
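For example (the block below is only an illustration; pick a block inside your VPC's CIDR that does not overlap existing subnets):
CIDR_BLOCK="10.0.4.0/24"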
aws ec2 create-subnet \
--region $REGION \
--vpc-id $VPC \
--cidr-block ${CIDR_BLOCK}
Note the subnet ID that was returned, and assign it to a variable for ease of later use:
SUBNET="(the subnet ID of the created subnet)"
Official AMI Images
Official AMI image ID can be found in the cloud-images.json
file attached to the Talos release:
AMI=`curl -sL https://github.com/siderolabs/talos/releases/download/v1.4.8/cloud-images.json | \
jq -r '.[] | select(.region == "'$REGION'") | select (.arch == "amd64") | .id'`
echo $AMI
Replace amd64
in the line above with the desired architecture.
Note that the AMI ID that is returned is assigned to an environment variable; it will be used later when booting instances.
If using the official AMIs, you can skip to Creating the Security group
Create your own AMIs
The use of the official Talos AMIs is recommended, but if you wish to build your own AMIs, follow the procedure below.
Create the S3 Bucket
aws s3api create-bucket \
--bucket $BUCKET \
--create-bucket-configuration LocationConstraint=$REGION \
--acl private
Create the vmimport
Role
In order to create an AMI, ensure that the vmimport
role exists as described in the official AWS documentation.
Note that the role should be associated with the S3 bucket we created above.
Create the Image Snapshot
First, download the AWS image from a Talos release:
curl -L https://github.com/siderolabs/talos/releases/download/v1.4.8/aws-amd64.tar.gz | tar -xv
Copy the RAW disk to S3 and import it as a snapshot:
aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
aws ec2 import-snapshot \
--region $REGION \
--description "Talos kubernetes tutorial" \
--disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}"
Save the SnapshotId
, as we will need it once the import is done.
To check on the status of the import, run:
aws ec2 describe-import-snapshot-tasks \
--region $REGION \
--import-task-ids <import-task-id>
Once the SnapshotTaskDetail.Status
indicates completed
, we can register the image.
Register the Image
aws ec2 register-image \
--region $REGION \
--block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT,VolumeSize=4,VolumeType=gp2}" \
--root-device-name /dev/xvda \
--virtualization-type hvm \
--architecture x86_64 \
--ena-support \
--name talos-aws-tutorial-ami
We now have an AMI we can use to create our cluster. Save the AMI ID, as we will need it when we create EC2 instances.
AMI="(AMI ID of the register image command)"
Create a Security Group
aws ec2 create-security-group \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--description "Security Group for EC2 instances to allow ports required by Talos"
SECURITY_GROUP="(security group id that is returned)"
Using the security group from above, allow all internal traffic within the same security group:
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol all \
--port 0 \
--source-group talos-aws-tutorial-sg
and expose the Talos and Kubernetes APIs:
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol tcp \
--port 6443 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--region $REGION \
--group-name talos-aws-tutorial-sg \
--protocol tcp \
--port 50000-50001 \
--cidr 0.0.0.0/0
Create a Load Balancer
aws elbv2 create-load-balancer \
--region $REGION \
--name talos-aws-tutorial-lb \
--type network --subnets $SUBNET
Take note of the DNS name and ARN. We will need these soon.
LOAD_BALANCER_ARN="(arn of the load balancer)"
aws elbv2 create-target-group \
--region $REGION \
--name talos-aws-tutorial-tg \
--protocol TCP \
--port 6443 \
--target-type ip \
--vpc-id $VPC
Also note the TargetGroupArn
that is returned.
TARGET_GROUP_ARN="(target group arn)"
Create the Machine Configuration Files
Using the DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines.
Note that the
port
used here is the externally accessible port configured on the load balancer - 443 - not the internal port of 6443:
$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer DNS>:<port> --with-examples=false --with-docs=false
created controlplane.yaml
created worker.yaml
created talosconfig
Note that the generated configs are too long for the AWS userdata field if the --with-examples and --with-docs flags are not passed.
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatch which will be applied during the config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the EC2 Instances
Change the instance type if desired. Note: there is a known issue that prevents Talos from running on T2 instance types; please use T3 if you need burstable instance types.
Create the Control Plane Nodes
CP_COUNT=1
while [[ "$CP_COUNT" -lt 4 ]]; do
aws ec2 run-instances \
--region $REGION \
--image-id $AMI \
--count 1 \
--instance-type t3.small \
--user-data file://controlplane.yaml \
--subnet-id $SUBNET \
--security-group-ids $SECURITY_GROUP \
--associate-public-ip-address \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$CP_COUNT}]"
((CP_COUNT++))
done
Make a note of the resulting
PrivateIpAddress
from the controlplane nodes for later use.
Create the Worker Nodes
aws ec2 run-instances \
--region $REGION \
--image-id $AMI \
--count 3 \
--instance-type t3.small \
--user-data file://worker.yaml \
--subnet-id $SUBNET \
--security-group-ids $SECURITY_GROUP \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-worker}]"
Configure the Load Balancer
Now, using the load balancer target group’s ARN and the PrivateIpAddress of each control plane instance you created, register the targets:
aws elbv2 register-targets \
--region $REGION \
--target-group-arn $TARGET_GROUP_ARN \
--targets Id=$CP_NODE_1_IP Id=$CP_NODE_2_IP Id=$CP_NODE_3_IP
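If you did not note the PrivateIpAddress values when creating the instances, one way to look them up is by the Name tags assigned earlier (a sketch, shown for the first control plane node; repeat for the others):
CP_NODE_1_IP=$(aws ec2 describe-instances \
  --region $REGION \
  --filters "Name=tag:Name,Values=talos-aws-tutorial-cp-1" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" \
  --output text)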
Using the ARNs of the load balancer and target group from previous steps, create the listener:
aws elbv2 create-listener \
--region $REGION \
--load-balancer-arn $LOAD_BALANCER_ARN \
--protocol TCP \
--port 443 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
Bootstrap Etcd
Set the endpoints (the control plane node to which talosctl commands are sent) and nodes (the nodes that the command operates on):
talosctl --talosconfig talosconfig config endpoint <control plane 1 PUBLIC IP>
talosctl --talosconfig talosconfig config node <control plane 1 PUBLIC IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
The different control plane nodes should send/receive traffic via the load balancer; notice that one of the control plane nodes has initiated the etcd cluster, and the others should join. You can now watch as your cluster bootstraps by using:
talosctl --talosconfig talosconfig health
You can also watch the performance of a node, via:
talosctl --talosconfig talosconfig dashboard
And use standard kubectl
commands.
1.3.2 - Azure
Creating a Cluster via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume existing Blob Storage, and some familiarity with Azure. If you need more information on Azure specifics, please see the official Azure documentation.
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.
# Storage account to use
export STORAGE_ACCOUNT="StorageAccountName"
# Storage container to upload to
export STORAGE_CONTAINER="StorageContainerName"
# Resource group name
export GROUP="ResourceGroupName"
# Location
export LOCATION="centralus"
# Get storage account connection string based on info above
export CONNECTION=$(az storage account show-connection-string \
-n $STORAGE_ACCOUNT \
-g $GROUP \
-o tsv)
Create the Image
First, download the Azure image from a Talos release.
Once downloaded, untar with tar -xvf /path/to/azure-amd64.tar.gz
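For example, for release v1.4.8 (the URL follows the same naming pattern as the other platform images; verify it against the release assets):
curl -L https://github.com/siderolabs/talos/releases/download/v1.4.8/azure-amd64.tar.gz -o azure-amd64.tar.gz
tar -xvf azure-amd64.tar.gz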
Upload the VHD
Once you have pulled down the image, you can upload it to blob storage with:
az storage blob upload \
--connection-string $CONNECTION \
--container-name $STORAGE_CONTAINER \
-f /path/to/extracted/talos-azure.vhd \
-n talos-azure.vhd
Register the Image
Now that the image is present in our blob storage, we’ll register it.
az image create \
--name talos \
--source https://$STORAGE_ACCOUNT.blob.core.windows.net/$STORAGE_CONTAINER/talos-azure.vhd \
--os-type linux \
-g $GROUP
Network Infrastructure
Virtual Networks and Security Groups
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a network security group and add rules to it.
# Create vnet
az network vnet create \
--resource-group $GROUP \
--location $LOCATION \
--name talos-vnet \
--subnet-name talos-subnet
# Create network security group
az network nsg create -g $GROUP -n talos-sg
# Client -> apid
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n apid \
--priority 1001 \
--destination-port-ranges 50000 \
--direction inbound
# Trustd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n trustd \
--priority 1002 \
--destination-port-ranges 50001 \
--direction inbound
# etcd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n etcd \
--priority 1003 \
--destination-port-ranges 2379-2380 \
--direction inbound
# Kubernetes API Server
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n kube \
--priority 1004 \
--destination-port-ranges 6443 \
--direction inbound
Load Balancer
We will create a public ip, load balancer, and a health check that we will use for our control plane.
# Create public ip
az network public-ip create \
--resource-group $GROUP \
--name talos-public-ip \
--allocation-method static
# Create lb
az network lb create \
--resource-group $GROUP \
--name talos-lb \
--public-ip-address talos-public-ip \
--frontend-ip-name talos-fe \
--backend-pool-name talos-be-pool
# Create health check
az network lb probe create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-lb-health \
--protocol tcp \
--port 6443
# Create lb rule for 6443
az network lb rule create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-6443 \
--protocol tcp \
--frontend-ip-name talos-fe \
--frontend-port 6443 \
--backend-pool-name talos-be-pool \
--backend-port 6443 \
--probe-name talos-lb-health
Network Interfaces
In Azure, we have to pre-create the NICs for our control plane so that they can be associated with our load balancer.
for i in $( seq 0 1 2 ); do
# Create public IP for each nic
az network public-ip create \
--resource-group $GROUP \
--name talos-controlplane-public-ip-$i \
--allocation-method static
# Create nic
az network nic create \
--resource-group $GROUP \
--name talos-controlplane-nic-$i \
--vnet-name talos-vnet \
--subnet talos-subnet \
--network-security-group talos-sg \
--public-ip-address talos-controlplane-public-ip-$i \
--lb-name talos-lb \
--lb-address-pools talos-be-pool
done
# NOTES:
# Talos can detect PublicIPs automatically if PublicIP SKU is Basic.
# Use `--sku Basic` to set SKU to Basic.
Cluster Configuration
With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-public-ip \
--query [ipAddress] \
--output tsv)
talosctl gen config talos-k8s-azure-tutorial https://${LB_PUBLIC_IP}:6443
Compute Creation
We are now ready to create our Azure nodes.
Azure allows you to pass Talos machine configuration to the virtual machine at bootstrap time via the user-data or custom-data methods.
Talos supports only the custom-data method; the machine configuration is available to the VM only on the first boot.
# Create availability set
az vm availability-set create \
--name talos-controlplane-av-set \
-g $GROUP
# Create the controlplane nodes
for i in $( seq 0 1 2 ); do
az vm create \
--name talos-controlplane-$i \
--image talos \
--custom-data ./controlplane.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--os-disk-size-gb 20 \
--nics talos-controlplane-nic-$i \
--availability-set talos-controlplane-av-set \
--no-wait
done
# Create worker node
az vm create \
--name talos-worker-0 \
--image talos \
--vnet-name talos-vnet \
--subnet talos-subnet \
--custom-data ./worker.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--nsg talos-sg \
--os-disk-size-gb 20 \
--no-wait
# NOTES:
# `--admin-username` and `--generate-ssh-keys` are required by the az cli,
# but are not actually used by talos
# `--os-disk-size-gb` is the backing disk for Kubernetes and any workload containers
# `--boot-diagnostics-storage` is to enable console output which may be necessary
# for troubleshooting
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-controlplane-public-ip-0 \
--query [ipAddress] \
--output tsv)
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig talosconfig config node $CONTROL_PLANE_0_IP
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
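As on the other platforms, you can then watch the cluster bootstrap and confirm that the nodes register, for example:
talosctl --talosconfig talosconfig health
kubectl --kubeconfig ./kubeconfig get nodes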
1.3.3 - DigitalOcean
Creating a Talos Linux Cluster on Digital Ocean via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node, in the NYC region. We assume an existing Space, and some familiarity with DigitalOcean. If you need more information on DigitalOcean specifics, please see the official DigitalOcean documentation.
Create the Image
Download the DigitalOcean image digital-ocean-amd64.raw.gz
from the latest Talos release.
Note: the minimum version of Talos required to support Digital Ocean is v1.3.3.
Using an upload method of your choice (doctl
does not have Spaces support), upload the image to a space.
(It’s easy to drag the image file to the space using DigitalOcean’s web console.)
Note: Make sure you upload the file as public.
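For example, one possible approach is to use the AWS CLI against the S3-compatible Spaces endpoint (a sketch only; it assumes the aws CLI is configured with your Spaces access keys, and that SPACENAME and REGION are set as in the next step):
aws s3 cp digital-ocean-amd64.raw.gz "s3://$SPACENAME/digital-ocean-amd64.raw.gz" \
--endpoint-url "https://$REGION.digitaloceanspaces.com" \
--acl public-read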
Now, create an image using the URL of the uploaded image:
export REGION=nyc3
doctl compute image create \
--region $REGION \
--image-description talos-digital-ocean-tutorial \
--image-url https://$SPACENAME.$REGION.digitaloceanspaces.com/digital-ocean-amd64.raw.gz \
Talos
Save the image ID. We will need it when creating droplets.
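If you need to look it up again later, custom images can be listed with, for example:
doctl compute image list-user --format ID,Name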
Create a Load Balancer
doctl compute load-balancer create \
--region $REGION \
--name talos-digital-ocean-tutorial-lb \
--tag-name talos-digital-ocean-tutorial-control-plane \
--health-check protocol:tcp,port:6443,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:5,unhealthy_threshold:3 \
--forwarding-rules entry_protocol:tcp,entry_port:443,target_protocol:tcp,target_port:6443
Note the returned ID of the load balancer.
We will need the IP of the load balancer. Using the ID of the load balancer, run:
doctl compute load-balancer get --format IP <load balancer ID>
Note that it may take a few minutes before the load balancer is provisioned, so repeat this command until it returns with the IP address.
Create the Machine Configuration Files
Using the IP address (or DNS name, if you have created one) of the loadbalancer, generate the base configuration files for the Talos machines. Also note that the load balancer forwards port 443 to port 6443 on the associated nodes, so we should use 443 as the port in the config definition:
$ talosctl gen config talos-k8s-digital-ocean-tutorial https://<load balancer IP or DNS>:443
created controlplane.yaml
created worker.yaml
created talosconfig
Create the Droplets
Create a dummy SSH key
Although SSH is not used by Talos, DigitalOcean requires that an SSH key be associated with a droplet during creation. We will create a dummy key that can be used to satisfy this requirement.
doctl compute ssh-key create --public-key "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbl0I1s/yOETIKjFr7mDLp8LmJn6OIZ68ILjVCkoN6lzKmvZEqEm1YYeWoI0xgb80hQ1fKkl0usW6MkSqwrijoUENhGFd6L16WFL53va4aeJjj2pxrjOr3uBFm/4ATvIfFTNVs+VUzFZ0eGzTgu1yXydX8lZMWnT4JpsMraHD3/qPP+pgyNuI51LjOCG0gVCzjl8NoGaQuKnl8KqbSCARIpETg1mMw+tuYgaKcbqYCMbxggaEKA0ixJ2MpFC/kwm3PcksTGqVBzp3+iE5AlRe1tnbr6GhgT839KLhOB03j7lFl1K9j1bMTOEj5Io8z7xo/XeF2ZQKHFWygAJiAhmKJ dummy@dummy.local" dummy
Note the ssh key ID that is returned - we will use it in creating the droplets.
Create the Control Plane Nodes
Run the following commands to create three control plane nodes:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-1
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-2
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-3
Note the droplet ID returned for the first control plane node.
Create the Worker Nodes
Run the following to create a worker node:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--user-data-file worker.yaml \
--ssh-keys <ssh key ID> \
talos-worker-1
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP:
doctl compute droplet get --format PublicIPv4 <droplet ID>
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
We can also watch the cluster bootstrap via:
talosctl --talosconfig talosconfig health
1.3.4 - Exoscale
Talos is known to work on exoscale.com; however, it is currently undocumented.
1.3.5 - GCP
Creating a Cluster via the CLI
In this guide, we will create an HA Kubernetes cluster in GCP with 1 worker node. We will assume an existing Cloud Storage bucket, and some familiarity with Google Cloud. If you need more information on Google Cloud specifics, please see the official Google documentation.
jq and talosctl also need to be installed.
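For example, on a Debian/Ubuntu workstation these could be installed as follows (a sketch; talosctl is installed with the same script used in the local QEMU guide later in this document):
sudo apt install -y jq
curl -sL https://talos.dev/install | sh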
Manual Setup
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.
# Storage account to use
export STORAGE_BUCKET="StorageBucketName"
# Region
export REGION="us-central1"
Create the Image
First, download the Google Cloud image from a Talos release.
These images are called gcp-$ARCH.tar.gz
.
Upload the Image
Once you have downloaded the image, you can upload it to your storage bucket with:
gsutil cp /path/to/gcp-amd64.tar.gz gs://$STORAGE_BUCKET
Register the image
Now that the image is present in our bucket, we’ll register it.
gcloud compute images create talos \
--source-uri=gs://$STORAGE_BUCKET/gcp-amd64.tar.gz \
--guest-os-features=VIRTIO_SCSI_MULTIQUEUE
Network Infrastructure
Load Balancers and Firewalls
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a firewall, load balancer, and their required components.
130.211.0.0/22 and 35.191.0.0/16 are the GCP Load Balancer IP ranges.
# Create Instance Group
gcloud compute instance-groups unmanaged create talos-ig \
--zone $REGION-b
# Create port for IG
gcloud compute instance-groups set-named-ports talos-ig \
--named-ports tcp6443:6443 \
--zone $REGION-b
# Create health check
gcloud compute health-checks create tcp talos-health-check --port 6443
# Create backend
gcloud compute backend-services create talos-be \
--global \
--protocol TCP \
--health-checks talos-health-check \
--timeout 5m \
--port-name tcp6443
# Add instance group to backend
gcloud compute backend-services add-backend talos-be \
--global \
--instance-group talos-ig \
--instance-group-zone $REGION-b
# Create tcp proxy
gcloud compute target-tcp-proxies create talos-tcp-proxy \
--backend-service talos-be \
--proxy-header NONE
# Create LB IP
gcloud compute addresses create talos-lb-ip --global
# Forward 443 from LB IP to tcp proxy
gcloud compute forwarding-rules create talos-fwd-rule \
--global \
--ports 443 \
--address talos-lb-ip \
--target-tcp-proxy talos-tcp-proxy
# Create firewall rule for health checks
gcloud compute firewall-rules create talos-controlplane-firewall \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags talos-controlplane \
--allow tcp:6443
# Create firewall rule to allow talosctl access
gcloud compute firewall-rules create talos-controlplane-talosctl \
--source-ranges 0.0.0.0/0 \
--target-tags talos-controlplane \
--allow tcp:50000
Cluster Configuration
With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(gcloud compute forwarding-rules describe talos-fwd-rule \
--global \
--format json \
| jq -r .IPAddress)
talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443
Additionally, you can specify --config-patch with RFC6902 jsonpatch which will be applied during the config generation.
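For example, a hypothetical patch that enables workload scheduling on the control plane nodes could be passed at generation time (adjust or drop the patch to suit your needs):
talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443 \
--config-patch '[{"op": "add", "path": "/cluster/allowSchedulingOnControlPlanes", "value": true}]'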
Compute Creation
We are now ready to create our GCP nodes.
# Create the control plane nodes.
for i in $( seq 1 3 ); do
gcloud compute instances create talos-controlplane-$i \
--image talos \
--zone $REGION-b \
--tags talos-controlplane,talos-controlplane-$i \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./controlplane.yaml
done
# Add control plane nodes to instance group
for i in $( seq 1 3 ); do
gcloud compute instance-groups unmanaged add-instances talos-ig \
--zone $REGION-b \
--instances talos-controlplane-$i
done
# Create worker
gcloud compute instances create talos-worker-0 \
--image talos \
--zone $REGION-b \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./worker.yaml \
--tags talos-worker-0
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(gcloud compute instances describe talos-controlplane-1 \
--zone $REGION-b \
--format json \
| jq -r '.networkInterfaces[0].accessConfigs[0].natIP')
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig talosconfig config node $CONTROL_PLANE_0_IP
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
Cleanup
# cleanup VM's
gcloud compute instances delete \
talos-worker-0 \
talos-controlplane-1 \
talos-controlplane-2 \
talos-controlplane-3
# cleanup firewall rules
gcloud compute firewall-rules delete \
talos-controlplane-talosctl \
talos-controlplane-firewall
# cleanup forwarding rules
gcloud compute forwarding-rules delete \
talos-fwd-rule
# cleanup addresses
gcloud compute addresses delete \
talos-lb-ip
# cleanup proxies
gcloud compute target-tcp-proxies delete \
talos-tcp-proxy
# cleanup backend services
gcloud compute backend-services delete \
talos-be
# cleanup health checks
gcloud compute health-checks delete \
talos-health-check
# cleanup unmanaged instance groups
gcloud compute instance-groups unmanaged delete \
talos-ig
# cleanup Talos image
gcloud compute images delete \
talos
Using GCP Deployment manager
Using GCP deployment manager automatically creates a Google Storage bucket and uploads the Talos image to it.
Once the deployment is complete the generated talosconfig
and kubeconfig
files are uploaded to the bucket.
By default this setup creates a three-node control plane and a single worker in us-west1-b.
First we need to create a folder to store our deployment manifests and perform all subsequent operations from that folder.
mkdir -p talos-gcp-deployment
cd talos-gcp-deployment
Getting the deployment manifests
We need to download two deployment manifests for the deployment from the Talos github repository.
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.4/talos-guides/install/cloud-platforms/gcp/config.yaml"
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.4/talos-guides/install/cloud-platforms/gcp/talos-ha.jinja"
# if using ccm
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.4/talos-guides/install/cloud-platforms/gcp/gcp-ccm.yaml"
Updating the config
Now we need to update the local config.yaml
file with any required changes such as changing the default zone, Talos version, machine sizes, nodes count etc.
An example config.yaml
file is shown below:
imports:
- path: talos-ha.jinja
resources:
- name: talos-ha
type: talos-ha.jinja
properties:
zone: us-west1-b
talosVersion: v1.4.8
externalCloudProvider: false
controlPlaneNodeCount: 5
controlPlaneNodeType: n1-standard-1
workerNodeCount: 3
workerNodeType: n1-standard-1
outputs:
- name: bucketName
value: $(ref.talos-ha.bucketName)
Enabling external cloud provider
Note: The externalCloudProvider
property is set to false
by default.
The manifest used for deploying the CCM (cloud controller manager) currently uses the GCP CCM provided by OpenShift, since there are no public images for the CCM yet.
Since the routes controller is disabled while deploying the CCM, the CNI pods need to be restarted after the CCM deployment is complete to remove the node.kubernetes.io/network-unavailable taint. See Nodes network-unavailable taint not removed after installing ccm for more information.
Use a custom-built image for the CCM deployment if required.
Creating the deployment
Now we are ready to create the deployment.
Confirm with y
for any prompts.
Run the following command to create the deployment:
# use a unique name for the deployment, resources are prefixed with the deployment name
export DEPLOYMENT_NAME="<deployment name>"
gcloud deployment-manager deployments create "${DEPLOYMENT_NAME}" --config config.yaml
Retrieving the outputs
First we need to get the deployment outputs.
# first get the outputs
OUTPUTS=$(gcloud deployment-manager deployments describe "${DEPLOYMENT_NAME}" --format json | jq '.outputs[]')
BUCKET_NAME=$(jq -r '. | select(.name == "bucketName").finalValue' <<< "${OUTPUTS}")
# used when cloud controller is enabled
SERVICE_ACCOUNT=$(jq -r '. | select(.name == "serviceAccount").finalValue' <<< "${OUTPUTS}")
PROJECT=$(jq -r '. | select(.name == "project").finalValue' <<< "${OUTPUTS}")
Note: If the cloud controller manager is enabled, the commands below need to be run to allow the controller's custom role to access cloud resources.
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member "serviceAccount:${SERVICE_ACCOUNT}" \
--role roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.admin
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.loadBalancerAdmin
Downloading talos and kube config
In addition to the talosconfig
and kubeconfig
files, the storage bucket contains the controlplane.yaml
and worker.yaml
files used to join additional nodes to the cluster.
gsutil cp "gs://${BUCKET_NAME}/generated/talosconfig" .
gsutil cp "gs://${BUCKET_NAME}/generated/kubeconfig" .
Deploying the cloud controller manager
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
apply \
--filename gcp-ccm.yaml
# wait for the ccm to be up
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout status \
daemonset cloud-controller-manager
If the cloud controller manager is enabled, we need to restart the CNI pods to remove the node.kubernetes.io/network-unavailable
taint.
# restart the CNI pods, in this case flannel
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout restart \
daemonset kube-flannel
# wait for the pods to be restarted
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout status \
daemonset kube-flannel
Check cluster status
kubectl \
--kubeconfig kubeconfig \
get nodes
Cleanup deployment
Warning: This will delete the deployment and all resources associated with it.
Run the following if the cloud controller manager is enabled:
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member "serviceAccount:${SERVICE_ACCOUNT}" \
--role roles/iam.serviceAccountUser
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.admin
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.loadBalancerAdmin
Now we can finally remove the deployment:
# delete the objects in the bucket first
gsutil -m rm -r "gs://${BUCKET_NAME}"
gcloud deployment-manager deployments delete "${DEPLOYMENT_NAME}" --quiet
1.3.6 - Hetzner
Upload image
Hetzner Cloud does not support uploading custom images. You can email their support to get a Talos ISO uploaded (see issue 3599), or you can prepare an image snapshot yourself.
There are two options for preparing your own image:
- Run an instance in rescue mode and replace the system OS with the Talos image
- Use Hashicorp packer to prepare an image
Rescue mode
Create a new Server in the Hetzner console.
Enable the Hetzner Rescue System for this server and reboot.
Upon a reboot, the server will boot a special minimal Linux distribution designed for repair and reinstall.
Once running, log in to the server using ssh to prepare the system disk by doing the following:
# Check that you are in Rescue mode
df
### Result is like:
# udev 987432 0 987432 0% /dev
# 213.133.99.101:/nfs 308577696 247015616 45817536 85% /root/.oldroot/nfs
# overlay 995672 8340 987332 1% /
# tmpfs 995672 0 995672 0% /dev/shm
# tmpfs 398272 572 397700 1% /run
# tmpfs 5120 0 5120 0% /run/lock
# tmpfs 199132 0 199132 0% /run/user/0
# Download the Talos image
cd /tmp
wget -O /tmp/talos.raw.xz https://github.com/siderolabs/talos/releases/download/v1.4.8/hcloud-amd64.raw.xz
# Replace system
xz -d -c /tmp/talos.raw.xz | dd of=/dev/sda && sync
# shutdown the instance
shutdown -h now
To make sure disk content is consistent, it is recommended to shut the server down before taking an image (snapshot). Once shutdown, simply create an image (snapshot) from the console. You can now use this snapshot to run Talos on the cloud.
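If you prefer the CLI over the console, the shutdown and snapshot can also be done with the hcloud utility used later in this guide (a sketch; talos-rescue is a hypothetical server name):
hcloud server shutdown talos-rescue
hcloud server create-image --type snapshot --description "talos system disk" talos-rescue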
Packer
Install packer to the local machine.
Create a config file for packer to use:
# hcloud.pkr.hcl
packer {
required_plugins {
hcloud = {
version = ">= 1.0.0"
source = "github.com/hashicorp/hcloud"
}
}
}
variable "talos_version" {
type = string
default = "v1.4.8"
}
locals {
image = "https://github.com/siderolabs/talos/releases/download/${var.talos_version}/hcloud-amd64.raw.xz"
}
source "hcloud" "talos" {
rescue = "linux64"
image = "debian-11"
location = "hel1"
server_type = "cx11"
ssh_username = "root"
snapshot_name = "talos system disk"
snapshot_labels = {
type = "infra",
os = "talos",
version = "${var.talos_version}",
}
}
build {
sources = ["source.hcloud.talos"]
provisioner "shell" {
inline = [
"apt-get install -y wget",
"wget -O /tmp/talos.raw.xz ${local.image}",
"xz -d -c /tmp/talos.raw.xz | dd of=/dev/sda && sync",
]
}
}
Create a new image by issuing the commands shown below. Note that to create a new API token for your Project, switch to the Hetzner Cloud Console, choose a Project, go to Access → Security, and create a new token.
# First you need to set the API token
export HCLOUD_TOKEN=${TOKEN}
# Upload image
packer init .
packer build .
# Save the image ID
export IMAGE_ID=<image-id-in-packer-output>
After doing this, you can find the snapshot in the console interface.
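The snapshot ID can also be looked up from the CLI, using the labels set in the packer config above, for example:
hcloud image list --type snapshot --selector os=talos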
Creating a Cluster via the CLI
This section assumes you have the hcloud console utility on your local machine.
# Set hcloud context and api key
hcloud context create talos-tutorial
Create a Load Balancer
Create a load balancer by issuing the commands shown below. Save the IP/DNS name, as this info will be used in the next step.
hcloud load-balancer create --name controlplane --network-zone eu-central --type lb11 --label 'type=controlplane'
### Result is like:
# LoadBalancer 484487 created
# IPv4: 49.12.X.X
# IPv6: 2a01:4f8:X:X::1
hcloud load-balancer add-service controlplane \
--listen-port 6443 --destination-port 6443 --protocol tcp
hcloud load-balancer add-target controlplane \
--label-selector 'type=controlplane'
Create the Machine Configuration Files
Generating Base Configurations
Using the IP/DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines by issuing:
$ talosctl gen config talos-k8s-hcloud-tutorial https://<load balancer IP or DNS>:6443
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatches which will be applied during the config generation.
Validate the Configuration Files
Validate any edited machine configs with:
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
We can now create our servers.
Note that you can find IMAGE_ID
in the snapshot section of the console: https://console.hetzner.cloud/projects/$PROJECT_ID/servers/snapshots
.
Create the Control Plane Nodes
Create the control plane nodes with:
export IMAGE_ID=<your-image-id>
hcloud server create --name talos-control-plane-1 \
--image ${IMAGE_ID} \
--type cx21 --location hel1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
hcloud server create --name talos-control-plane-2 \
--image ${IMAGE_ID} \
--type cx21 --location fsn1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
hcloud server create --name talos-control-plane-3 \
--image ${IMAGE_ID} \
--type cx21 --location nbg1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
Create the Worker Nodes
Create the worker nodes with the following command, repeating (and incrementing the name counter) as many times as desired.
hcloud server create --name talos-worker-1 \
--image ${IMAGE_ID} \
--type cx21 --location hel1 \
--label 'type=worker' \
--user-data-from-file worker.yaml
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP.
This can be found by issuing:
hcloud server list | grep talos-control-plane
Set the endpoints
and nodes
for your talosconfig with:
talosctl --talosconfig talosconfig config endpoint <control-plane-1-IP>
talosctl --talosconfig talosconfig config node <control-plane-1-IP>
Bootstrap etcd
on the first control plane node with:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
1.3.7 - Nocloud
Talos supports the nocloud data source implementation.
There are two ways to configure a Talos server on the nocloud platform:
- via SMBIOS “serial number” option
- using CDROM or USB-flash filesystem
Note: This requires the nocloud image which can be found on the Github Releases page.
SMBIOS Serial Number
This method requires the network connection to be up (e.g. via DHCP). Configuration is delivered from the HTTP server.
ds=nocloud-net;s=http://10.10.0.1/configs/;h=HOSTNAME
After the network initialization is complete, Talos fetches:
- the machine config from
http://10.10.0.1/configs/user-data
- the network config (if available) from
http://10.10.0.1/configs/network-config
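A minimal sketch of serving these files, assuming a generated controlplane.yaml and Python 3 available on the host at 10.10.0.1 (any HTTP server will do):
mkdir -p configs
# the machine config must be served under the name user-data
cp controlplane.yaml configs/user-data
# optionally place a network-config file alongside it
sudo python3 -m http.server 80 --directory .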
SMBIOS: QEMU
Add the following flag to qemu
command line when starting a VM:
qemu-system-x86_64 \
...\
-smbios "type=1,serial=ds=nocloud-net;s=http://10.10.0.1/configs/"
SMBIOS: Proxmox
Set the source machine config through the serial number in the Proxmox GUI.
Proxmox stores the VM config at /etc/pve/qemu-server/$ID.conf ($ID is the VM ID number of the virtual machine); you will see something like:
...
smbios1: uuid=ceae4d10,serial=ZHM9bm9jbG91ZC1uZXQ7cz1odHRwOi8vMTAuMTAuMC4xL2NvbmZpZ3Mv,base64=1
...
Where serial holds the base64-encoded string version of ds=nocloud-net;s=http://10.10.0.1/configs/
.
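The base64 value can be generated with, for example:
echo -n 'ds=nocloud-net;s=http://10.10.0.1/configs/' | base64
# ZHM9bm9jbG91ZC1uZXQ7cz1odHRwOi8vMTAuMTAuMC4xL2NvbmZpZ3Mv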
CDROM/USB
Talos can also get the machine config from locally attached storage, without any prior network connection being established.
You can provide configs to the server via files on a VFAT or ISO9660 filesystem.
The filesystem volume label must be cidata
or CIDATA
.
Example: QEMU
Create and prepare Talos machine config:
export CONTROL_PLANE_IP=192.168.1.10
talosctl gen config talos-nocloud https://$CONTROL_PLANE_IP:6443 --output-dir _out
Prepare cloud-init configs:
mkdir -p iso
mv _out/controlplane.yaml iso/user-data
echo "local-hostname: controlplane-1" > iso/meta-data
cat > iso/network-config << EOF
version: 1
config:
- type: physical
name: eth0
mac_address: "52:54:00:12:34:00"
subnets:
- type: static
address: 192.168.1.10
netmask: 255.255.255.0
gateway: 192.168.1.254
EOF
Create cloud-init iso image
cd iso && genisoimage -output cidata.iso -V cidata -r -J user-data meta-data network-config
Start the VM
qemu-system-x86_64 \
...
-cdrom iso/cidata.iso \
...
Example: Proxmox
Proxmox can create the cloud-init disk for you. Edit the cloud-init config information in Proxmox as follows, substituting your own information as necessary, and then update the cicustom param at /etc/pve/qemu-server/$ID.conf:
cicustom: user=local:snippets/controlplane-1.yml
ipconfig0: ip=192.168.1.10/24,gw=192.168.10.254
nameserver: 1.1.1.1
searchdomain: local
Note: snippets/controlplane-1.yml is the Talos machine config. It is usually located at /var/lib/vz/snippets/controlplane-1.yml. This file must be placed at this path manually, as Proxmox does not support snippet uploading via API/GUI.
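For example, the snippet could be copied into place from your workstation (proxmox-host is a hypothetical hostname for the Proxmox node):
scp controlplane.yaml root@proxmox-host:/var/lib/vz/snippets/controlplane-1.yml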
Click on the Regenerate Image button after the above changes are made.
1.3.8 - OpenStack
Creating a Cluster via the CLI
In this guide, we will create an HA Kubernetes cluster in OpenStack with 1 worker node. We assume some familiarity with OpenStack. If you need more information on OpenStack specifics, please see the official OpenStack documentation.
Environment Setup
You should have an existing openrc file. This file will provide environment variables necessary to talk to your OpenStack cloud. See here for instructions on fetching this file.
Create the Image
First, download the OpenStack image from a Talos release.
These images are called openstack-$ARCH.tar.gz
.
Untar this file with tar -xvf openstack-$ARCH.tar.gz
.
The resulting file will be called disk.raw
.
Upload the Image
Once you have the image, you can upload to OpenStack with:
openstack image create --public --disk-format raw --file disk.raw talos
Network Infrastructure
Load Balancer and Network Ports
Once the image is prepared, you will need to work through setting up the network. Issue the following to create a load balancer, the necessary network ports for each control plane node, and associations between the two.
Creating loadbalancer:
# Create load balancer, updating vip-subnet-id if necessary
openstack loadbalancer create --name talos-control-plane --vip-subnet-id public
# Create listener
openstack loadbalancer listener create --name talos-control-plane-listener --protocol TCP --protocol-port 6443 talos-control-plane
# Pool and health monitoring
openstack loadbalancer pool create --name talos-control-plane-pool --lb-algorithm ROUND_ROBIN --listener talos-control-plane-listener --protocol TCP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP talos-control-plane-pool
Creating ports:
# Create ports for control plane nodes, updating network name if necessary
openstack port create --network shared talos-control-plane-1
openstack port create --network shared talos-control-plane-2
openstack port create --network shared talos-control-plane-3
# Create floating IPs for the ports, so that you will have talosctl connectivity to each control plane
openstack floating ip create --port talos-control-plane-1 public
openstack floating ip create --port talos-control-plane-2 public
openstack floating ip create --port talos-control-plane-3 public
Note: Take notice of the private and public IPs associated with each of these ports, as they will be used in the next step. Additionally, take note of the port ID, as it will be used in server creation.
Associate port’s private IPs to loadbalancer:
# Create members for each port IP, updating subnet-id and address as necessary.
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-1 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-2 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-3 PORT> --protocol-port 6443 talos-control-plane-pool
Security Groups
This example uses the default security group in OpenStack. Ports have been opened to ensure that connectivity from both inside and outside the group is possible. You will want to allow, at a minimum, ports 6443 (Kubernetes API server) and 50000 (Talos API) from external sources. It is also recommended to allow communication over all ports from within the subnet.
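For example, rules for these two ports could be added to the default security group with the following (a sketch; tighten the source ranges for production use):
# Kubernetes API server
openstack security group rule create --protocol tcp --dst-port 6443 --remote-ip 0.0.0.0/0 default
# Talos API
openstack security group rule create --protocol tcp --dst-port 50000 --remote-ip 0.0.0.0/0 default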
Cluster Configuration
With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(openstack loadbalancer show talos-control-plane -f json | jq -r .vip_address)
talosctl gen config talos-k8s-openstack-tutorial https://${LB_PUBLIC_IP}:6443
Additionally, you can specify --config-patch
with RFC6902 jsonpatch which will be applied during the config generation.
Compute Creation
We are now ready to create our OpenStack nodes.
Create control plane:
# Create control planes 2 and 3, substituting the same info.
for i in $( seq 1 3 ); do
openstack server create talos-control-plane-$i --flavor m1.small --nic port-id=talos-control-plane-$i --image talos --user-data /path/to/controlplane.yaml
done
Create worker:
# Update network name as necessary.
openstack server create talos-worker-1 --flavor m1.small --network shared --image talos --user-data /path/to/worker.yaml
Note: This step can be repeated to add more workers.
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl.
We will use one of the floating IPs we allocated earlier; it does not matter which one.
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
1.3.9 - Oracle
Upload image
Oracle Cloud does not currently have an official Talos image, so you can use the Bring Your Own Image (BYOI) approach.
Once the image is uploaded, set the Boot volume type to Paravirtualized mode.
Oracle Cloud has a highly available NTP service; it can be enabled in the Talos machine config with:
machine:
time:
servers:
- 169.254.169.254
Creating a Cluster via the CLI
Log in to the console and open the Cloud Shell.
Create a network
export cidr_block=10.0.0.0/16
export subnet_block=10.0.0.0/24
export compartment_id=<substitute-value-of-compartment_id> # https://docs.cloud.oracle.com/en-us/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/network/vcn/create.html#cmdoption-compartment-id
export vcn_id=$(oci network vcn create --cidr-block $cidr_block --display-name talos-example --compartment-id $compartment_id --query data.id --raw-output)
export rt_id=$(oci network subnet create --cidr-block $subnet_block --display-name kubernetes --compartment-id $compartment_id --vcn-id $vcn_id --query data.route-table-id --raw-output)
export ig_id=$(oci network internet-gateway create --compartment-id $compartment_id --is-enabled true --vcn-id $vcn_id --query data.id --raw-output)
oci network route-table update --rt-id $rt_id --route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$ig_id\"}]" --force
# disable firewall
export sl_id=$(oci network vcn list --compartment-id $compartment_id --query 'data[0]."default-security-list-id"' --raw-output)
oci network security-list update --security-list-id $sl_id --egress-security-rules '[{"destination": "0.0.0.0/0", "protocol": "all", "isStateless": false}]' --ingress-security-rules '[{"source": "0.0.0.0/0", "protocol": "all", "isStateless": false}]' --force
Create a Load Balancer
Create a load balancer by issuing the commands shown below. Save the IP/DNS name, as this info will be used in the next step.
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export network_load_balancer_id=$(oci nlb network-load-balancer create --compartment-id $compartment_id --display-name controlplane-lb --subnet-id $subnet_id --is-preserve-source-destination false --is-private false --query data.id --raw-output)
cat <<EOF > talos-health-checker.json
{
"intervalInMillis": 10000,
"port": 50000,
"protocol": "TCP"
}
EOF
oci nlb backend-set create --health-checker file://talos-health-checker.json --name talos --network-load-balancer-id $network_load_balancer_id --policy TWO_TUPLE --is-preserve-source false
oci nlb listener create --default-backend-set-name talos --name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --protocol TCP
cat <<EOF > controlplane-health-checker.json
{
"intervalInMillis": 10000,
"port": 6443,
"protocol": "HTTPS",
"returnCode": 401,
"urlPath": "/readyz"
}
EOF
oci nlb backend-set create --health-checker file://controlplane-health-checker.json --name controlplane --network-load-balancer-id $network_load_balancer_id --policy TWO_TUPLE --is-preserve-source false
oci nlb listener create --default-backend-set-name controlplane --name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --protocol TCP
# Save the external IP
oci nlb network-load-balancer list --compartment-id $compartment_id --display-name controlplane-lb --query 'data.items[0]."ip-addresses"'
Create the Machine Configuration Files
Generating Base Configurations
Using the IP/DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines by issuing:
$ talosctl gen config talos-k8s-oracle-tutorial https://<load balancer IP or DNS>:6443 --additional-sans <load balancer IP or DNS>
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatches which will be applied during the config generation.
Validate the Configuration Files
Validate any edited machine configs with:
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
Create the Control Plane Nodes
Create the control plane nodes with:
export shape='VM.Standard.A1.Flex'
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export image_id=$(oci compute image list --compartment-id $compartment_id --shape $shape --operating-system Talos --limit 1 --query data[0].id --raw-output)
export availability_domain=$(oci iam availability-domain list --compartment-id=$compartment_id --query data[0].name --raw-output)
export network_load_balancer_id=$(oci nlb network-load-balancer list --compartment-id $compartment_id --display-name controlplane-lb --query 'data.items[0].id' --raw-output)
cat <<EOF > shape.json
{
"memoryInGBs": 4,
"ocpus": 1
}
EOF
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-1 --private-ip 10.0.0.11 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-2 --private-ip 10.0.0.12 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-3 --private-ip 10.0.0.13 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
Create the Worker Nodes
Create the worker nodes with the following command, repeating (and incrementing the name counter) as many times as desired.
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export image_id=$(oci compute image list --compartment-id $compartment_id --operating-system Talos --limit 1 --query data[0].id --raw-output)
export availability_domain=$(oci iam availability-domain list --compartment-id=$compartment_id --query data[0].name --raw-output)
export shape='VM.Standard.E2.1.Micro'
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-1 --assign-public-ip true --user-data-file worker.yaml
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-2 --assign-public-ip true --user-data-file worker.yaml
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-3 --assign-public-ip true --user-data-file worker.yaml
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP.
This can be found by issuing:
export instance_id=$(oci compute instance list --compartment-id $compartment_id --display-name controlplane-1 --query 'data[0].id' --raw-output)
oci compute instance list-vnics --instance-id $instance_id --query 'data[0]."private-ip"' --raw-output
Set the endpoints
and nodes
for your talosconfig with:
talosctl --talosconfig talosconfig config endpoint <load balancer IP or DNS>
talosctl --talosconfig talosconfig config node <control-plane-1-IP>
Bootstrap etcd
on the first control plane node with:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
1.3.10 - Scaleway
Talos is known to work on scaleway.com; however, it is currently undocumented.
1.3.11 - UpCloud
In this guide we will create an HA Kubernetes cluster with 3 control plane nodes and 1 worker node. We assume some familiarity with UpCloud. If you need more information on UpCloud specifics, please see the official UpCloud documentation.
Create the Image
The best way to create an image for UpCloud is to build one using HashiCorp Packer, with the upcloud-amd64.raw.xz image found on the Talos Releases page.
Using the general ISO is also possible, but the UpCloud image has some UpCloud-specific features implemented, such as the fetching of metadata and user data to configure the nodes.
To create the cluster, you need the UpCloud CLI (upctl) and HashiCorp Packer installed locally.
NOTE: Make sure your account allows API connections. To do so, log into the UpCloud control panel and go to People -> Account -> Permissions -> Allow API connections checkbox. It is recommended to create a separate subaccount for your API access and only set the API permission.
To use the UpCloud CLI, you need to create a config in $HOME/.config/upctl.yaml
username: your_upcloud_username
password: your_upcloud_password
To use the UpCloud Packer plugin, you also need to export these credentials as environment variables, e.g. by putting the following in your .bashrc or .zshrc:
export UPCLOUD_USERNAME="<username>"
export UPCLOUD_PASSWORD="<password>"
Next create a config file for packer to use:
# upcloud.pkr.hcl
packer {
required_plugins {
upcloud = {
version = ">=v1.0.0"
source = "github.com/UpCloudLtd/upcloud"
}
}
}
variable "talos_version" {
type = string
default = "v1.4.8"
}
locals {
image = "https://github.com/siderolabs/talos/releases/download/${var.talos_version}/upcloud-amd64.raw.xz"
}
variable "username" {
type = string
description = "UpCloud API username"
default = "${env("UPCLOUD_USERNAME")}"
}
variable "password" {
type = string
description = "UpCloud API password"
default = "${env("UPCLOUD_PASSWORD")}"
sensitive = true
}
source "upcloud" "talos" {
username = "${var.username}"
password = "${var.password}"
zone = "us-nyc1"
storage_name = "Debian GNU/Linux 11 (Bullseye)"
template_name = "Talos (${var.talos_version})"
}
build {
sources = ["source.upcloud.talos"]
provisioner "shell" {
inline = [
"apt-get install -y wget xz-utils",
"wget -q -O /tmp/talos.raw.xz ${local.image}",
"xz -d -c /tmp/talos.raw.xz | dd of=/dev/vda",
]
}
provisioner "shell-local" {
inline = [
"upctl server stop --type hard custom",
]
}
}
Now create a new image by issuing the commands shown below.
packer init .
packer build .
After doing this, you can find the custom image in the console interface under storage.
Creating a Cluster via the CLI
Create an Endpoint
To communicate with the Talos cluster you will need a single endpoint that is used to access the cluster. This can either be a loadbalancer that will sit in front of all your control plane nodes, a DNS name with one or more A or AAAA records pointing to the control plane nodes, or directly the IP of a control plane node.
Which option is best for you will depend on your needs. Endpoint selection has been further documented here.
After you decide on which endpoint to use, note down the domain name or IP, as we will need it in the next step.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the endpoint created earlier, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-upcloud-tutorial https://<load balancer IP or DNS>:<port> --install-disk /dev/vda
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking. Depending on the Kubernetes version you want to run, you might need to select a different Talos version, as not all versions are compatible. You can find the support matrix here.
Optionally, you can specify --config-patch
with RFC6902 jsonpatch or yamlpatch
which will be applied during the config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
Create the Control Plane Nodes
Run the following to create three total control plane nodes:
for ID in $(seq 3); do
upctl server create \
--zone us-nyc1 \
--title talos-us-nyc1-master-$ID \
--hostname talos-us-nyc1-master-$ID \
--plan 2xCPU-4GB \
--os "Talos (v1.4.8)" \
--user-data "$(cat controlplane.yaml)" \
--enable-metadata
done
Note: modify the zone and OS depending on your preferences. The OS should match the template name generated with packer in the previous step.
Note the IP address of the first control plane node, as we will need it later.
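The address can be looked up with, for example:
upctl server show talos-us-nyc1-master-1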
Create the Worker Nodes
Run the following to create a worker node:
upctl server create \
--zone us-nyc1 \
--title talos-us-nyc1-worker-1 \
--hostname talos-us-nyc1-worker-1 \
--plan 2xCPU-4GB \
--os "Talos (v1.4.8)" \
--user-data "$(cat worker.yaml)" \
--enable-metadata
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP, as noted earlier.
We only add one node IP, as that is the entry into our cluster against which our commands will be run.
All requests to other nodes are proxied through the endpoint, and therefore not
all nodes need to be manually added to the config.
You don’t want to run your commands against all nodes, as this can destroy your
cluster if you are not careful (further documentation).
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig
It will take a few minutes before Kubernetes has been fully bootstrapped, and is accessible.
You can check if the nodes are registered in Talos by running
talosctl --talosconfig talosconfig get members
To check if your nodes are ready, run
kubectl get nodes
1.3.12 - Vultr
Creating a Cluster using the Vultr CLI
This guide will demonstrate how to create a highly-available Kubernetes cluster with one worker using the Vultr cloud provider.
Vultr has a very well documented REST API, and an open-source CLI tool to interact with the API, which will be used in this guide.
Make sure to follow installation and authentication instructions for the vultr-cli
tool.
Upload image
First step is to make the Talos ISO available to Vultr by uploading the latest release of the ISO to the Vultr ISO server.
vultr-cli iso create --url https://github.com/siderolabs/talos/releases/download/v1.4.8/talos-amd64.iso
Make a note of the ID
in the output, it will be needed later when creating the instances.
Create a Load Balancer
A load balancer is needed to serve as the Kubernetes endpoint for the cluster.
vultr-cli load-balancer create \
--region $REGION \
--label "Talos Kubernetes Endpoint" \
--port 6443 \
--protocol tcp \
--check-interval 10 \
--response-timeout 5 \
--healthy-threshold 5 \
--unhealthy-threshold 3 \
--forwarding-rules frontend_protocol:tcp,frontend_port:443,backend_protocol:tcp,backend_port:6443
Make a note of the ID
of the load balancer from the output of the above command, it will be needed after the control plane instances are created.
vultr-cli load-balancer get $LOAD_BALANCER_ID | grep ^IP
Make a note of the IP
address, it will be needed later when generating the configuration.
Create the Machine Configuration
Generate Base Configuration
Using the IP address (or DNS name if one was created) of the load balancer created above, generate the machine configuration files for the new cluster.
talosctl gen config talos-kubernetes-vultr https://$LOAD_BALANCER_ADDRESS
Once generated, the machine configuration can be modified as necessary for the new cluster, for instance updating the installation disk, or adding SANs for the certificates.
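For example (a sketch; the install disk device name and any extra SANs depend on your environment), some of these settings can also be passed at generation time:
talosctl gen config talos-kubernetes-vultr https://$LOAD_BALANCER_ADDRESS \
--install-disk /dev/vda \
--additional-sans $LOAD_BALANCER_ADDRESS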
Validate the Configuration Files
talosctl validate --config controlplane.yaml --mode cloud
talosctl validate --config worker.yaml --mode cloud
Create the Nodes
Create the Control Plane Nodes
First a control plane needs to be created, with the example below creating 3 instances in a loop.
The instance type (noted by the --plan vc2-2c-4gb
argument) in the example is for a minimum-spec control plane node, and should be updated to suit the cluster being created.
for id in $(seq 3); do
vultr-cli instance create \
--plan vc2-2c-4gb \
--region $REGION \
--iso $TALOS_ISO_ID \
--host talos-k8s-cp${id} \
--label "Talos Kubernetes Control Plane" \
--tags talos,kubernetes,control-plane
done
Make a note of the instance IDs, as they are needed to attach to the load balancer created earlier.
vultr-cli load-balancer update $LOAD_BALANCER_ID --instances $CONTROL_PLANE_1_ID,$CONTROL_PLANE_2_ID,$CONTROL_PLANE_3_ID
Once the nodes are booted and waiting in maintenance mode, the machine configuration can be applied to each one in turn.
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_1_ADDRESS --file controlplane.yaml
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_2_ADDRESS --file controlplane.yaml
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_3_ADDRESS --file controlplane.yaml
Create the Worker Nodes
Now worker nodes can be created and configured in a similar way to the control plane nodes, the difference being mainly in the machine configuration file.
Note that, like with the control plane nodes, the instance type (here set by --plan vc2-1c-1gb) should be changed for the actual cluster requirements.
for id in $(seq 1); do
vultr-cli instance create \
--plan vc2-1c-1gb \
--region $REGION \
--iso $TALOS_ISO_ID \
--host talos-k8s-worker${id} \
--label "Talos Kubernetes Worker" \
--tags talos,kubernetes,worker
done
Once the worker is booted and in maintenance mode, the machine configuration can be applied in the following manner.
talosctl --talosconfig talosconfig apply-config --insecure --nodes $WORKER_1_ADDRESS --file worker.yaml
Bootstrap etcd
Once all the cluster nodes are correctly configured, the cluster can be bootstrapped to become functional.
It is important that the talosctl bootstrap
command be executed only once and against only a single control plane node.
talosctl --talosconfig talosconfig bootstrap --endpoints $CONTROL_PLANE_1_ADDRESS --nodes $CONTROL_PLANE_1_ADDRESS
Configure Endpoints and Nodes
While the cluster goes through the bootstrapping process and begins to self-manage, the talosconfig can be updated with the endpoints and nodes.
talosctl --talosconfig talosconfig config endpoints $CONTROL_PLANE_1_ADDRESS $CONTROL_PLANE_2_ADDRESS $CONTROL_PLANE_3_ADDRESS
talosctl --talosconfig talosconfig config nodes $CONTROL_PLANE_1_ADDRESS $CONTROL_PLANE_2_ADDRESS $CONTROL_PLANE_3_ADDRESS $WORKER_1_ADDRESS
Retrieve the kubeconfig
Finally, with the cluster fully running, the administrative kubeconfig
can be retrieved from the Talos API to be saved locally.
talosctl --talosconfig talosconfig kubeconfig .
Now the kubeconfig
can be used by any of the usual Kubernetes tools to interact with the Talos-based Kubernetes cluster as normal.
1.4 - Local Platforms
1.4.1 - Docker
In this guide we will create a Kubernetes cluster in Docker, using a containerized version of Talos.
Running Talos in Docker is intended for use in CI pipelines and for local testing when you need a quick and easy cluster. Furthermore, if you are running Talos in production, it provides an excellent way for developers to develop against the same version of Talos.
Requirements
The following are requirements for running Talos in Docker:
- Docker 18.03 or greater
- a recent version of
talosctl
Caveats
Because Talos will be running in a container, certain APIs are not available. For example, upgrade, reset, and similar APIs don't apply in container mode.
Further, when running in Docker on a Mac, VIPs are not supported due to networking limitations.
Create the Cluster
Creating a local cluster is as simple as:
talosctl cluster create --wait
Once the above finishes successfully, your talosconfig (~/.talos/config) will be configured to point to the new cluster.
Note: Startup times can take up to a minute or more before the cluster is available.
Finally, we just need to specify which nodes you want to communicate with using talosctl. Talosctl can operate on one or all of the nodes in the cluster – this makes cluster-wide commands much easier.
talosctl config nodes 10.5.0.2 10.5.0.3
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl containers
for a list of containers in the system
namespace, or talosctl containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl logs <container>
or talosctl logs -k <container>
.
Cleaning Up
To cleanup, run:
talosctl cluster destroy
Running Talos in Docker Manually
To run Talos in a container manually, run:
docker run --rm -it \
--name tutorial \
--hostname talos-cp \
--read-only \
--privileged \
--security-opt seccomp=unconfined \
--mount type=tmpfs,destination=/run \
--mount type=tmpfs,destination=/system \
--mount type=tmpfs,destination=/tmp \
--mount type=volume,destination=/system/state \
--mount type=volume,destination=/var \
--mount type=volume,destination=/etc/cni \
--mount type=volume,destination=/etc/kubernetes \
--mount type=volume,destination=/usr/libexec/kubernetes \
--mount type=volume,destination=/usr/etc/udev \
--mount type=volume,destination=/opt \
-e PLATFORM=container \
ghcr.io/siderolabs/talos:v1.4.8
1.4.2 - QEMU
In this guide we will create a Kubernetes cluster using QEMU.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Requirements
- Linux
- a kernel with:
  - KVM enabled (/dev/kvm must exist)
  - CONFIG_NET_SCH_NETEM enabled
  - CONFIG_NET_SCH_INGRESS enabled
- at least CAP_SYS_ADMIN and CAP_NET_ADMIN capabilities
- QEMU
- bridge, static and firewall CNI plugins from the standard CNI plugins, and the tc-redirect-tap CNI plugin from awslabs tc-redirect-tap, installed to /opt/cni/bin (installed automatically by talosctl)
- iptables
- the /var/run/netns directory should exist
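Several of these prerequisites can be checked from a shell before creating the cluster. The snippet below is only a rough sketch: the kernel config location (/boot/config-$(uname -r)) varies between distributions.
# KVM must be available
ls -l /dev/kvm
# netem and ingress qdisc support (kernel config location varies by distro)
grep -E 'CONFIG_NET_SCH_(NETEM|INGRESS)=' /boot/config-$(uname -r)
# iptables and QEMU must be on the PATH
command -v iptables qemu-system-x86_64
# network namespace directory must exist
ls -ld /var/run/netns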
Installation
How to get QEMU
Install QEMU with your operating system package manager. For example, on Ubuntu for x86:
apt install qemu-system-x86 qemu-kvm
Install talosctl
Download talosctl
via
curl -sL https://talos.dev/install | sh
Install Talos kernel and initramfs
QEMU provisioner depends on Talos kernel (vmlinuz
) and initramfs (initramfs.xz
).
These files can be downloaded from the Talos release:
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/vmlinuz-<arch> -L -o _out/vmlinuz-<arch>
curl https://github.com/siderolabs/talos/releases/download/<version>/initramfs-<arch>.xz -L -o _out/initramfs-<arch>.xz
For example version v1.4.8
:
curl https://github.com/siderolabs/talos/releases/download/v1.4.8/vmlinuz-amd64 -L -o _out/vmlinuz-amd64
curl https://github.com/siderolabs/talos/releases/download/v1.4.8/initramfs-amd64.xz -L -o _out/initramfs-amd64.xz
Create the Cluster
Before creating the first cluster, create the root state directory as your user so that you can inspect the logs as a non-root user:
mkdir -p ~/.talos/clusters
Create the cluster:
sudo --preserve-env=HOME talosctl cluster create --provisioner qemu
Before the first cluster is created, talosctl
will download the CNI bundle for the VM provisioning and install it to ~/.talos/cni
directory.
Once the above finishes successfully, your talosconfig (~/.talos/config
) will be configured to point to the new cluster, and kubeconfig
will be
downloaded and merged into default kubectl config location (~/.kube/config
).
Cluster provisioning process can be optimized with registry pull-through caches.
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl -n 10.5.0.2 containers
for a list of containers in the system
namespace, or talosctl -n 10.5.0.2 containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl -n 10.5.0.2 logs <container>
or talosctl -n 10.5.0.2 logs -k <container>
.
A bridge interface will be created, and assigned the default IP 10.5.0.1. Each node will be directly accessible on the subnet specified at cluster creation time. A loadbalancer runs on 10.5.0.1 by default, which handles loadbalancing for the Kubernetes APIs.
You can see a summary of the cluster state by running:
$ talosctl cluster show --provisioner qemu
PROVISIONER qemu
NAME talos-default
NETWORK NAME talos-default
NETWORK CIDR 10.5.0.0/24
NETWORK GATEWAY 10.5.0.1
NETWORK MTU 1500
NODES:
NAME TYPE IP CPU RAM DISK
talos-default-controlplane-1 ControlPlane 10.5.0.2 1.00 1.6 GB 4.3 GB
talos-default-controlplane-2 ControlPlane 10.5.0.3 1.00 1.6 GB 4.3 GB
talos-default-controlplane-3 ControlPlane 10.5.0.4 1.00 1.6 GB 4.3 GB
talos-default-worker-1 Worker 10.5.0.5 1.00 1.6 GB 4.3 GB
Cleaning Up
To cleanup, run:
sudo --preserve-env=HOME talosctl cluster destroy --provisioner qemu
Note: In the case that the host machine is rebooted before destroying the cluster, you may need to manually remove
~/.talos/clusters/talos-default
.
Manual Clean Up
The talosctl cluster destroy command depends heavily on the cluster's state directory, which contains all related information about the cluster: the PIDs and the network associated with the cluster nodes.
If you have deleted the state folder by mistake, or if you would like to clean up the environment manually, here are the steps:
Remove VM Launchers
Find the process of talosctl qemu-launch
:
ps -elf | grep 'talosctl qemu-launch'
To remove the VMs manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where VMs are running with PIDs 157615 and 157617
ps -elf | grep '[t]alosctl qemu-launch'
0 S root 157615 2835 0 80 0 - 184934 - 07:53 ? 00:00:00 talosctl qemu-launch
0 S root 157617 2835 0 80 0 - 185062 - 07:53 ? 00:00:00 talosctl qemu-launch
sudo kill -s SIGTERM 157615
sudo kill -s SIGTERM 157617
Stopping VMs
Find the process of qemu-system
:
ps -elf | grep 'qemu-system'
To stop the VMs manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where the qemu-system processes and their PIDs are shown:
ps -elf | grep qemu-system
2 S root 1061663 1061168 26 80 0 - 1786238 - 14:05 ? 01:53:56 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/home/username/.talos/clusters/talos-default/bootstrap-master.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=1e:86:c6:b4:7c:c4 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=7ec0a73c-826e-4eeb-afd1-39ff9f9160ca -machine q35,accel=kvm
2 S root 1061663 1061170 67 80 0 - 621014 - 21:23 ? 00:00:07 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/homeusername/.talos/clusters/talos-default/pxe-1.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=36:f3:2f:c3:9f:06 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=ce12a0d0-29c8-490f-b935-f6073ab916a6 -machine q35,accel=kvm
sudo kill -s SIGTERM 1061663
sudo kill -s SIGTERM 1061663
Remove load balancer
Find the process of talosctl loadbalancer-launch
:
ps -elf | grep 'talosctl loadbalancer-launch'
To remove the LB manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where loadbalancer is running with PID 157609
ps -elf | grep '[t]alosctl loadbalancer-launch'
4 S root 157609 2835 0 80 0 - 184998 - 07:53 ? 00:00:07 talosctl loadbalancer-launch --loadbalancer-addr 10.5.0.1 --loadbalancer-upstreams 10.5.0.2
sudo kill -s SIGTERM 157609
Remove DHCP server
Find the process of talosctl dhcpd-launch
:
ps -elf | grep 'talosctl dhcpd-launch'
To remove the DHCP server manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where the DHCP server is running with PID 157609
ps -elf | grep '[t]alosctl dhcpd-launch'
4 S root 157609 2835 0 80 0 - 184998 - 07:53 ? 00:00:07 talosctl dhcpd-launch --state-path /home/username/.talos/clusters/talos-default --addr 10.5.0.1 --interface talosbd9c32bc
sudo kill -s SIGTERM 157609
Remove network
This part is trickier if you have already deleted the state folder. If you haven't, the bridge name is recorded in state.yaml in the ~/.talos/clusters/<cluster-name> directory.
sudo cat ~/.talos/clusters/<cluster-name>/state.yaml | grep bridgename
bridgename: talos<uuid>
If you only had one cluster, then it will be the interface named talos<uuid>:
46: talos<uuid>: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether a6:72:f4:0a:d3:9c brd ff:ff:ff:ff:ff:ff
inet 10.5.0.1/24 brd 10.5.0.255 scope global talos17c13299
valid_lft forever preferred_lft forever
inet6 fe80::a472:f4ff:fe0a:d39c/64 scope link
valid_lft forever preferred_lft forever
To remove this interface:
sudo ip link del talos<uuid>
Remove state directory
To remove the state directory execute:
sudo rm -Rf /home/$USER/.talos/clusters/<cluster-name>
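The steps above can be combined into a rough cleanup sketch. This is not an official tool; it assumes the default cluster name talos-default, and the qemu-system match will terminate any QEMU VM on the host, so review the process list first.
#!/usr/bin/env bash
CLUSTER=talos-default
# stop the VM launchers, VMs, load balancer and DHCP server
sudo pkill -TERM -f 'talosctl qemu-launch'
sudo pkill -TERM -f 'qemu-system'          # WARNING: matches any QEMU VM on this host
sudo pkill -TERM -f 'talosctl loadbalancer-launch'
sudo pkill -TERM -f 'talosctl dhcpd-launch'
# delete the bridge interface recorded in state.yaml, if the state directory still exists
BRIDGE=$(sudo grep bridgename ~/.talos/clusters/${CLUSTER}/state.yaml 2>/dev/null | awk '{print $2}')
[ -n "${BRIDGE}" ] && sudo ip link del "${BRIDGE}"
# remove the state directory
sudo rm -rf ~/.talos/clusters/${CLUSTER}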
Troubleshooting
Logs
Inspect logs directory
sudo cat ~/.talos/clusters/<cluster-name>/*.log
Logs are saved as <cluster-name>-<role>-<node-id>.log.
For example, for a cluster named k8s:
ls -la ~/.talos/clusters/k8s | grep log
-rw-r--r--. 1 root root 69415 Apr 26 20:58 k8s-master-1.log
-rw-r--r--. 1 root root 68345 Apr 26 20:58 k8s-worker-1.log
-rw-r--r--. 1 root root 24621 Apr 26 20:59 lb.log
Inspect logs during the installation
tail -f ~/.talos/clusters/<cluster-name>/*.log
1.4.3 - VirtualBox
In this guide we will create a Kubernetes cluster using VirtualBox.
Video Walkthrough
To see a live demo of this writeup, visit Youtube here:
Installation
How to Get VirtualBox
Install VirtualBox with your operating system package manager or from the website. For example, on Ubuntu for x86:
apt install virtualbox
Install talosctl
You can download talosctl
via
curl -sL https://talos.dev/install | sh
Download ISO Image
Download the ISO image from the Talos release page.
You can download talos-amd64.iso
via
github.com/siderolabs/talos/releases
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso
For example version v1.4.8
for linux
platform:
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/v1.4.8/talos-amd64.iso -L -o _out/talos-amd64.iso
Create VMs
Start by creating a new VM by clicking the “New” button in the VirtualBox UI:
Supply a name for this VM, and specify the Type and Version:
Edit the memory to supply at least 2GB of RAM for the VM:
Proceed through the disk settings, keeping the defaults. You can increase the disk space if desired.
Once created, select the VM and hit “Settings”:
In the “System” section, supply at least 2 CPUs:
In the “Network” section, switch the network “Attached To” section to “Bridged Adapter”:
Finally, in the “Storage” section, select the optical drive and, on the right, select the ISO by browsing your filesystem:
Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.
Start Control Plane Node
Once the VMs have been created and updated, start the VM that will be the first control plane node.
This VM will boot the ISO image specified earlier and enter “maintenance mode”.
Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received.
Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP
for the rest of this guide.
If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4
.
Generate Machine Configurations
With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
talosctl gen config talos-vbox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out
directory: controlplane.yaml, worker.yaml, and talosconfig.
Create Control Plane Node
Using the controlplane.yaml
generated above, you can now apply this config using talosctl.
Issue:
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml
You should now see some action in the VirtualBox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.
Note: This process can be repeated multiple times to create an HA control plane.
Create Worker Node
Create at least a single worker node using a process similar to the control plane creation above.
Start the worker node VM and wait for it to enter “maintenance mode”.
Take note of the worker node’s IP address, which will be referred to as $WORKER_IP
Issue:
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml
Note: This process can be repeated multiple times to add additional workers.
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl containers
for a list of containers in the system
namespace, or talosctl containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl logs <container>
or talosctl logs -k <container>
.
First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
Bootstrap Etcd
Set the endpoints
and nodes
:
talosctl --talosconfig $TALOSCONFIG config endpoint <control plane 1 IP>
talosctl --talosconfig $TALOSCONFIG config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig $TALOSCONFIG bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig $TALOSCONFIG kubeconfig .
You can then use kubectl in this fashion:
kubectl get nodes
Cleaning Up
To cleanup, simply stop and delete the virtual machines from the VirtualBox UI.
1.5 - Single Board Computers
1.5.1 - Banana Pi M64
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-bananapi_m64-arm64.img.xz
xz -d metal-bananapi_m64-arm64.img.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-bananapi_m64-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.2 - Friendlyelec Nano PI R4S
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-nanopi_r4s-arm64.img.xz
xz -d metal-nanopi_r4s-arm64.img.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-nanopi_r4s-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.3 - Jetson Nano
Prerequisites
You will need:
- talosctl
- an SD card/USB drive
- crane CLI
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Flashing the firmware to on-board SPI flash
Flashing the firmware only needs to be done once.
We will use the R32.7.2 release for the Jetson Nano.
Most of the instructions are similar to this doc, except that we'll be using an upstream version of u-boot
with patches from NVIDIA u-boot so that USB boot also works.
Before flashing we need the following:
- A USB-A to micro USB cable
- A jumper wire to enable recovery mode
- An HDMI monitor to view the logs if the USB serial adapter is not available
- A USB to Serial adapter with 3.3V TTL (optional)
- A 5V DC barrel jack
If you’re planning to use the serial console, follow the documentation here.
First start by downloading the Jetson Nano L4T release.
curl -SLO https://developer.nvidia.com/embedded/l4t/r32_release_v7.1/t210/jetson-210_linux_r32.7.2_aarch64.tbz2
Next we will extract the L4T release and replace the u-boot
binary with the patched version.
tar xf jetson-210_linux_r32.7.2_aarch64.tbz2
cd Linux_for_Tegra
crane --platform=linux/arm64 export ghcr.io/siderolabs/u-boot:v1.3.0-alpha.0-25-g0ac7773 - | tar xf - --strip-components=1 -C bootloader/t210ref/p3450-0000/ jetson_nano/u-boot.bin
Next we will flash the firmware to the Jetson Nano SPI flash. In order to do that, we need to put the Jetson Nano into Force Recovery Mode (FRC). We will use the instructions from here:
- Ensure that the Jetson Nano is powered off. There is no need for the SD card/USB storage/network cable to be connected
- Connect the micro USB cable to the micro USB port on the Jetson Nano, don’t plug the other end to the PC yet
- Enable Force Recovery Mode (FRC) by placing a jumper across the FRC pins on the Jetson Nano
  - For board revision A02, these are pins 3 and 4 of header J40
  - For board revision B01, these are pins 9 and 10 of header J50
- Place another jumper across J48 to enable power from the DC jack and connect the Jetson Nano to the DC jack J25
- Now connect the other end of the micro USB cable to the PC and remove the jumper wire from the FRC pins
Now the Jetson Nano is in Force Recovery Mode (FRC), which can be confirmed by running the following command:
lsusb | grep -i "nvidia"
Now we can move on to flashing the firmware.
sudo ./flash p3448-0000-max-spi external
This will flash the firmware to the Jetson Nano SPI flash and you’ll see a lot of output. If you’ve connected the serial console you’ll also see the progress there. Once the flashing is done you can disconnect the USB cable and power off the Jetson Nano.
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-jetson_nano-arm64.img.xz
xz -d metal-jetson_nano-arm64.img.xz
Writing the Image
Now dd
the image to your SD card/USB storage:
sudo dd if=metal-jetson_nano-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M status=progress
Replace /dev/mmcblk0 with the name of your SD card/USB storage.
Bootstrapping the Node
Insert the SD card/USB storage into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.4 - Libre Computer Board ALL-H3-CC
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-libretech_all_h3_cc_h5-arm64.img.xz
xz -d metal-libretech_all_h3_cc_h5-arm64.img.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-libretech_all_h3_cc_h5-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.5 - Pine64
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-pine64-arm64.img.xz
xz -d metal-pine64-arm64.img.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-pine64-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.6 - Pine64 Rock64
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rock64-arm64.img.xz
xz -d metal-rock64-arm64.img.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-rock64-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.7 - Radxa ROCK PI 4
Prerequisites
You will need:
- talosctl
- an SD card or an eMMC or USB drive or an NVMe drive
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rockpi_4-arm64.img.xz
xz -d metal-rockpi_4-arm64.img.xz
Writing the Image
The path to your SD card/eMMC/USB/nVME can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-rockpi_4-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
The user has two options to proceed:
- booting from a SD card or eMMC
- booting from a USB or nVME (requires the RockPi board to have the SPI flash)
Booting from SD card or eMMC
Insert the SD card into the board, turn it on and proceed to bootstrapping the node.
Booting from USB or nVME
This requires flashing the RockPi SPI flash with u-boot.
You will need access to the crane CLI, a spare SD card, and optionally access to the RockPi serial console.
- Flash the Rock PI 4c variant of Debian to the SD card.
- Boot into the debian image
- Check that /dev/mtdblock0 exists (e.g. with lsblk); otherwise the dd command below will silently fail.
- Download the u-boot image from talos u-boot:
mkdir _out
crane --platform=linux/arm64 export ghcr.io/siderolabs/u-boot:v1.3.0-alpha.0-25-g0ac7773 - | tar xf - --strip-components=1 -C _out rockpi_4/rkspi_loader.img
sudo dd if=_out/rkspi_loader.img of=/dev/mtdblock0 bs=4K
- Optionally, you can also write Talos image to the SSD drive right from your Rock PI board:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rockpi_4-arm64.img.xz
xz -d metal-rockpi_4-arm64.img.xz
sudo dd if=metal-rockpi_4-arm64.img of=/dev/nvme0n1
- remove SD card and reboot.
After these steps, Talos will boot from the nVME/USB and enter maintenance mode. Proceed to bootstrapping the node.
Bootstrapping the Node
Wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.8 - Radxa ROCK PI 4C
Prerequisites
You will need:
- talosctl
- an SD card or an eMMC or USB drive or an NVMe drive
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rockpi_4c-arm64.img.xz
xz -d metal-rockpi_4c-arm64.img.xz
Writing the Image
The path to your SD card/eMMC/USB/nVME can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-rockpi_4c-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
The user has two options to proceed:
- booting from a SD card or eMMC
- booting from a USB or nVME (requires the RockPi board to have the SPI flash)
Booting from SD card or eMMC
Insert the SD card into the board, turn it on and proceed to bootstrapping the node.
Booting from USB or nVME
This requires flashing the RockPi SPI flash with u-boot.
You will need access to the crane CLI, a spare SD card, and optionally access to the RockPi serial console.
- Flash the Rock PI 4c variant of Debian to the SD card.
- Boot into the debian image
- Check that /dev/mtdblock0 exists (e.g. with lsblk); otherwise the dd command below will silently fail.
- Download the u-boot image from talos u-boot:
mkdir _out
crane --platform=linux/arm64 export ghcr.io/siderolabs/u-boot:v1.3.0-alpha.0-25-g0ac7773 - | tar xf - --strip-components=1 -C _out rockpi_4c/rkspi_loader.img
sudo dd if=_out/rkspi_loader.img of=/dev/mtdblock0 bs=4K
- Optionally, you can also write Talos image to the SSD drive right from your Rock PI board:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rockpi_4c-arm64.img.xz
xz -d metal-rockpi_4c-arm64.img.xz
sudo dd if=metal-rockpi_4c-arm64.img of=/dev/nvme0n1
- remove SD card and reboot.
After these steps, Talos will boot from the nVME/USB and enter maintenance mode. Proceed to bootstrapping the node.
Bootstrapping the Node
Wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
1.5.9 - Raspberry Pi 4 Model B
The Raspberry Pi specific image is now deprecated; Talos uses a generic image for the various Raspberry Pi models supported by the u-boot rpi_arm64_defconfig. Refer to the generic docs available here.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.4.8/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Updating the EEPROM
At least version v2020.09.03-138a1
of the bootloader (rpi-eeprom
) is required.
To update the bootloader we will need an SD card.
Insert the SD card into your computer and use Raspberry Pi Imager
to install the bootloader on it (select Operating System > Misc utility images > Bootloader > SD Card Boot).
Alternatively, you can use the console on Linux or macOS.
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
curl -Lo rpi-boot-eeprom-recovery.zip https://github.com/raspberrypi/rpi-eeprom/releases/download/v2021.04.29-138a1/rpi-boot-eeprom-recovery-2021-04-29-vl805-000138a1.zip
sudo mkfs.fat -I /dev/mmcblk0
sudo mount /dev/mmcblk0p1 /mnt
sudo bsdtar -x -f rpi-boot-eeprom-recovery.zip -C /mnt
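Before removing the SD card, it is a good idea to flush any pending writes and unmount it:
sync
sudo umount /mnt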
Remove the SD card from your local machine and insert it into the Raspberry Pi. Power the Raspberry Pi on, and wait at least 10 seconds. If successful, the green LED light will blink rapidly (forever), otherwise an error pattern will be displayed. If an HDMI display is attached to the port closest to the power/USB-C port, the screen will display green for success or red if a failure occurs. Power off the Raspberry Pi and remove the SD card from it.
Note: Updating the bootloader only needs to be done once.
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rpi_4-arm64.img.xz
xz -d metal-rpi_4-arm64.img.xz
Writing the Image
Now dd
the image to your SD card:
sudo dd if=metal-rpi_4-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Note: if you have an HDMI display attached and it shows only a rainbow splash, please use the other HDMI port, the one closest to the power/USB-C port.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Troubleshooting
The following table can be used to troubleshoot booting issues:
Long Flashes | Short Flashes | Status |
---|---|---|
0 | 3 | Generic failure to boot |
0 | 4 | start*.elf not found |
0 | 7 | Kernel image not found |
0 | 8 | SDRAM failure |
0 | 9 | Insufficient SDRAM |
0 | 10 | In HALT state |
2 | 1 | Partition not FAT |
2 | 2 | Failed to read from partition |
2 | 3 | Extended partition not FAT |
2 | 4 | File signature/hash mismatch - Pi 4 |
4 | 4 | Unsupported board type |
4 | 5 | Fatal firmware error |
4 | 6 | Power failure type A |
4 | 7 | Power failure type B |
1.5.10 - Raspberry Pi Series
The Talos disk image for the generic Raspberry Pi should in theory work for the boards supported by the u-boot rpi_arm64_defconfig.
This has only been officially tested on the Raspberry Pi 4 and community tested on one variant of the Compute Module 4 using Super 6C boards.
If you have tested this on other Raspberry Pi boards, please let us know.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Prerequisites
You will need:
- talosctl
- an SD card
Download the latest talosctl:
curl -sL 'https://www.talos.dev/install' | bash
Updating the EEPROM
Use Raspberry Pi Imager to write an EEPROM update image to a spare SD card. Select Misc utility images under the Operating System tab.
Remove the SD card from your local machine and insert it into the Raspberry Pi. Power the Raspberry Pi on, and wait at least 10 seconds. If successful, the green LED light will blink rapidly (forever), otherwise an error pattern will be displayed. If an HDMI display is attached to the port closest to the power/USB-C port, the screen will display green for success or red if a failure occurs. Power off the Raspberry Pi and remove the SD card from it.
Note: Updating the bootloader only needs to be done once.
Download the Image
Download the image and decompress it:
curl -LO https://github.com/siderolabs/talos/releases/download/v1.4.8/metal-rpi_generic-arm64.img.xz
xz -d metal-rpi_generic-arm64.img.xz
Writing the Image
Now dd
the image to your SD card:
sudo dd if=metal-rpi_generic-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Note: if you have an HDMI display attached and it shows only a rainbow splash, please use the other HDMI port, the one closest to the power/USB-C port.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Troubleshooting
The following table can be used to troubleshoot booting issues:
Long Flashes | Short Flashes | Status |
---|---|---|
0 | 3 | Generic failure to boot |
0 | 4 | start*.elf not found |
0 | 7 | Kernel image not found |
0 | 8 | SDRAM failure |
0 | 9 | Insufficient SDRAM |
0 | 10 | In HALT state |
2 | 1 | Partition not FAT |
2 | 2 | Failed to read from partition |
2 | 3 | Extended partition not FAT |
2 | 4 | File signature/hash mismatch - Pi 4 |
4 | 4 | Unsupported board type |
4 | 5 | Fatal firmware error |
4 | 6 | Power failure type A |
4 | 7 | Power failure type B |
2 - Configuration
2.1 - Configuration Patches
Talos generates machine configuration for two types of machines: controlplane and worker machines.
Many configuration options can be adjusted using talosctl gen config
but not all of them.
Configuration patching allows modifying the machine configuration to tailor it to the cluster or to a specific machine.
Configuration Patch Formats
Talos supports two configuration patch formats:
- strategic merge patches
- RFC6902 (JSON patches)
Strategic merge patches are the easiest to use, but JSON patches allow more precise configuration adjustments.
Strategic Merge patches
Strategic merge patches look like incomplete machine configuration files:
machine:
network:
hostname: worker1
When applied to the machine configuration, the patch gets merged with the respective section of the machine configuration:
machine:
network:
interfaces:
- interface: eth0
addresses:
- 10.0.0.2/24
hostname: worker1
In general, machine configuration contents are merged with the contents of the strategic merge patch, with strategic merge patch values overriding machine configuration values. There are some special rules:
- If the field value is a list, the patch value is appended to the list, with the following exceptions:
  - values of the fields cluster.network.podSubnets and cluster.network.serviceSubnets are overwritten on merge
  - the network.interfaces section is merged with the value in the machine config if there is a match on the interface: or deviceSelector: keys
  - the network.interfaces.vlans section is merged with the value in the machine config if there is a match on the vlanId: key
RFC6902 (JSON Patches)
JSON patches can be written either in JSON or YAML format.
A proper JSON patch requires an op
field that depends on the machine configuration contents: whether the path already exists or not.
For example, the strategic merge patch from the previous section can be written either as:
- op: replace
path: /machine/network/hostname
value: worker1
or:
- op: add
path: /machine/network/hostname
value: worker1
The correct op
depends on whether the /machine/network/hostname
section exists already in the machine config or not.
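RFC6902 also defines a remove operation, which drops a field entirely; for example, a patch removing a previously set hostname could look like:
- op: remove
  path: /machine/network/hostname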
Examples
Machine Network
Base machine configuration:
# ...
machine:
network:
interfaces:
- interface: eth0
dhcp: false
addresses:
- 192.168.10.3/24
The goal is to add a virtual IP 192.168.10.50 to the eth0 interface and add another interface eth1 with DHCP enabled. As a strategic merge patch:
machine:
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.10.50
- interface: eth1
dhcp: true
The same change as a JSON patch:
- op: add
path: /machine/network/interfaces/0/vip
value:
ip: 192.168.10.50
- op: add
path: /machine/network/interfaces/-
value:
interface: eth1
dhcp: true
Patched machine configuration:
machine:
network:
interfaces:
- interface: eth0
dhcp: false
addresses:
- 192.168.10.3/24
vip:
ip: 192.168.10.50
- interface: eth1
dhcp: true
Cluster Network
Base machine configuration:
cluster:
network:
dnsDomain: cluster.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/12
The goal is to update the pod and service subnets and disable the default CNI (Flannel). As a strategic merge patch:
cluster:
network:
podSubnets:
- 192.168.0.0/16
serviceSubnets:
- 192.0.0.0/12
cni:
name: none
The same change as a JSON patch:
- op: replace
path: /cluster/network/podSubnets
value:
- 192.168.0.0/16
- op: replace
path: /cluster/network/serviceSubnets
value:
- 192.0.0.0/12
- op: add
path: /cluster/network/cni
value:
name: none
Patched machine configuration:
cluster:
network:
dnsDomain: cluster.local
podSubnets:
- 192.168.0.0/16
serviceSubnets:
- 192.0.0.0/12
cni:
name: none
Kubelet
Base machine configuration:
# ...
machine:
kubelet: {}
The goal is to set the kubelet node IP to come from the subnet 192.168.10.0/24. As a strategic merge patch:
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.10.0/24
The same change as a JSON patch:
- op: add
path: /machine/kubelet/nodeIP
value:
validSubnets:
- 192.168.10.0/24
Patched machine configuration:
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.10.0/24
Admission Control: Pod Security Policy
Base machine configuration:
cluster:
apiServer:
admissionControl:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1alpha1
defaults:
audit: restricted
audit-version: latest
enforce: baseline
enforce-version: latest
warn: restricted
warn-version: latest
exemptions:
namespaces:
- kube-system
runtimeClasses: []
usernames: []
kind: PodSecurityConfiguration
The goal is to add an exemption for the namespace rook-ceph. As a strategic merge patch:
cluster:
apiServer:
admissionControl:
- name: PodSecurity
configuration:
exemptions:
namespaces:
- rook-ceph
The same change as a JSON patch:
- op: add
path: /cluster/apiServer/admissionControl/0/configuration/exemptions/namespaces/-
value: rook-ceph
Patched machine configuration:
cluster:
apiServer:
admissionControl:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1alpha1
defaults:
audit: restricted
audit-version: latest
enforce: baseline
enforce-version: latest
warn: restricted
warn-version: latest
exemptions:
namespaces:
- kube-system
- rook-ceph
runtimeClasses: []
usernames: []
kind: PodSecurityConfiguration
Configuration Patching with talosctl
CLI
Several talosctl
commands accept config patches as command-line flags.
Config patches might be passed either as an inline value or as a reference to a file with @file.patch
syntax:
talosctl ... --patch '[{"op": "add", "path": "/machine/network/hostname", "value": "worker1"}]' --patch @file.patch
If multiple config patches are specified, they are applied in the order of appearance. The format of the patch (JSON patch or strategic merge patch) is detected automatically.
Talos machine configuration can be patched at the moment of generation with talosctl gen config
:
talosctl gen config test-cluster https://172.20.0.1:6443 --config-patch @all.yaml --config-patch-control-plane @cp.yaml --config-patch-worker @worker.yaml
Generated machine configuration can also be patched after the fact with talosctl machineconfig patch:
talosctl machineconfig patch worker.yaml --patch @patch.yaml -o worker1.yaml
Machine configuration on the running Talos node can be patched with talosctl patch
:
talosctl patch mc --nodes 172.20.0.2 --patch @patch.yaml
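Because the patch format is detected automatically, an inline strategic merge patch can be passed the same way; for example, setting the hostname from the earlier examples on a running node:
talosctl patch mc --nodes 172.20.0.2 --patch '{"machine": {"network": {"hostname": "worker1"}}}'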
2.2 - Containerd
The base containerd configuration expects to merge in any additional configs present in /etc/cri/conf.d/20-customization.part
.
Examples
Exposing Metrics
Patch the machine config by adding the following:
machine:
files:
- content: |
[metrics]
address = "0.0.0.0:11234"
path: /etc/cri/conf.d/20-customization.part
op: create
Once the server reboots, metrics are now available:
$ curl ${IP}:11234/v1/metrics
# HELP container_blkio_io_service_bytes_recursive_bytes The blkio io service bytes recursive
# TYPE container_blkio_io_service_bytes_recursive_bytes gauge
container_blkio_io_service_bytes_recursive_bytes{container_id="0677d73196f5f4be1d408aab1c4125cf9e6c458a4bea39e590ac779709ffbe14",device="/dev/dm-0",major="253",minor="0",namespace="k8s.io",op="Async"} 0
container_blkio_io_service_bytes_recursive_bytes{container_id="0677d73196f5f4be1d408aab1c4125cf9e6c458a4bea39e590ac779709ffbe14",device="/dev/dm-0",major="253",minor="0",namespace="k8s.io",op="Discard"} 0
...
...
Pause Image
This change is often required for air-gapped environments, as the containerd CRI plugin has a reference to the pause image which is used to create pods, and it can’t be controlled with Kubernetes pod definitions.
machine:
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.8"
path: /etc/cri/conf.d/20-customization.part
op: create
Now the pause
image is set to registry.k8s.io/pause:3.8
:
$ talosctl containers --kubernetes
NODE NAMESPACE ID IMAGE PID STATUS
172.20.0.5 k8s.io kube-system/kube-flannel-6hfck registry.k8s.io/pause:3.8 1773 SANDBOX_READY
172.20.0.5 k8s.io └─ kube-system/kube-flannel-6hfck:install-cni:bc39fec3cbac ghcr.io/siderolabs/install-cni:v1.3.0-alpha.0-2-gb155fa0 0 CONTAINER_EXITED
172.20.0.5 k8s.io └─ kube-system/kube-flannel-6hfck:install-config:5c3989353b98 ghcr.io/siderolabs/flannel:v0.20.1 0 CONTAINER_EXITED
172.20.0.5 k8s.io └─ kube-system/kube-flannel-6hfck:kube-flannel:116c67b50da8 ghcr.io/siderolabs/flannel:v0.20.1 2092 CONTAINER_RUNNING
172.20.0.5 k8s.io kube-system/kube-proxy-xp7jq registry.k8s.io/pause:3.8 1780 SANDBOX_READY
172.20.0.5 k8s.io └─ kube-system/kube-proxy-xp7jq:kube-proxy:84fc77c59e17 registry.k8s.io/kube-proxy:v1.26.0-alpha.3 1843 CONTAINER_RUNNING
2.3 - Custom Certificate Authorities
Appending the Certificate Authority
Put into each machine the PEM encoded certificate:
machine:
...
files:
- content: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
permissions: 0644
path: /etc/ssl/certs/ca-certificates
op: append
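The same snippet can also be applied to machines that are already configured, for example as a strategic merge patch saved to a file (here called ca-patch.yaml):
talosctl patch mc --nodes <node IP> --patch @ca-patch.yaml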
2.4 - Disk Encryption
It is possible to enable encryption for system disks at the OS level.
Currently, only STATE and EPHEMERAL partitions can be encrypted.
STATE contains the most sensitive node data: secrets and certs.
The EPHEMERAL partition may contain sensitive workload data.
Data is encrypted using LUKS2, which is provided by the Linux kernel modules and cryptsetup
utility.
The operating system will run additional setup steps when encryption is enabled.
If the disk encryption is enabled for the STATE partition, the system will:
- Save STATE encryption config as JSON in the META partition.
- Before mounting the STATE partition, load encryption configs either from the machine config or from the META partition. Note that the machine config is always preferred over the META one.
- Before mounting the STATE partition, format and encrypt it. This occurs only if the STATE partition is empty and has no filesystem.
If the disk encryption is enabled for the EPHEMERAL partition, the system will:
- Get the encryption config from the machine config.
- Before mounting the EPHEMERAL partition, encrypt and format it.
This occurs only if the EPHEMERAL partition is empty and has no filesystem.
Note: Talos Linux disk encryption is designed to guard against data being leaked or recovered from a drive that has been removed from a Talos Linux node. It uses the hardware characteristics of the machine in order to decrypt the data, so drives that have been removed, or recycled from a cloud environment or attached to a different virtual machine, will maintain their protection and encryption. It is not designed to protect against attacks where physical access to the machine, including the drive, is available.
Configuration
Disk encryption is disabled by default. To enable disk encryption you should modify the machine configuration with the following options:
machine:
...
systemDiskEncryption:
ephemeral:
provider: luks2
keys:
- nodeID: {}
slot: 0
state:
provider: luks2
keys:
- nodeID: {}
slot: 0
Encryption Keys
Note: What the LUKS2 docs call “keys” are, in reality, a passphrase. When this passphrase is added, LUKS2 runs argon2 to create an actual key from that passphrase.
LUKS2 supports up to 32 encryption keys and it is possible to specify all of them in the machine configuration. Talos always tries to sync the keys list defined in the machine config with the actual keys defined for the LUKS2 partition. So if you update the keys list, keep at least one key that is not changed to be used for key management.
When you define a key you should specify the key kind and the slot
:
machine:
...
state:
keys:
- nodeID: {} # key kind
slot: 1
ephemeral:
keys:
- static:
passphrase: supersecret
slot: 0
Note that key order does not determine which key slot is used; every key must always have a slot defined.
Encryption Key Kinds
Talos supports two kinds of keys:
nodeID
which is generated using the node UUID and the partition label (note that if the node UUID is not really random it will fail the entropy check).static
which you define right in the configuration.
Note: Use static keys only if your STATE partition is encrypted and only for the EPHEMERAL partition. For the STATE partition it will be stored in the META partition, which is not encrypted.
Key Rotation
In order to completely rotate keys, it is necessary to do talosctl apply-config
a couple of times, since there is a need to always maintain a single working key while changing the other keys around it.
So, for example, first add a new key:
machine:
...
ephemeral:
keys:
- static:
passphrase: oldkey
slot: 0
- static:
passphrase: newkey
slot: 1
...
Run:
talosctl apply-config -n <node> -f config.yaml
Then remove the old key:
machine:
...
ephemeral:
keys:
- static:
passphrase: newkey
slot: 1
...
Run:
talosctl apply-config -n <node> -f config.yaml
Going from Unencrypted to Encrypted and Vice Versa
Ephemeral Partition
There is no in-place encryption support for the partitions right now, so to avoid losing data only empty partitions can be encrypted.
As such, migration from unencrypted to encrypted needs some additional handling, especially around explicitly wiping partitions:
- apply-config should be called with --mode=staged.
- The partition should be wiped after apply-config, but before the reboot.
Edit your machine config and add the encryption configuration:
vim config.yaml
Apply the configuration with --mode=staged
:
talosctl apply-config -f config.yaml -n <node ip> --mode=staged
Wipe the partition you’re going to encrypt:
talosctl reset --system-labels-to-wipe EPHEMERAL -n <node ip> --reboot=true
That’s it! After you run the last command, the partition will be wiped and the node will reboot. During the next boot the system will encrypt the partition.
State Partition
Calling wipe against the STATE partition will make the node lose the config, so the previous flow is not going to work.
The flow should be to first wipe the STATE partition:
talosctl reset --system-labels-to-wipe STATE -n <node ip> --reboot=true
The node will enter maintenance mode; then run apply-config with the --insecure flag:
talosctl apply-config --insecure -n <node ip> -f config.yaml
After installation is complete the node should encrypt the STATE partition.
2.5 - Editing Machine Configuration
Talos node state is fully defined by machine configuration. Initial configuration is delivered to the node at bootstrap time, but configuration can be updated while the node is running.
Note: Be sure that the config is persisted so that configuration updates are not overwritten on reboots. Configuration persistence has been enabled by default since Talos 0.5 (persist: true in the machine configuration).
There are three talosctl
commands which facilitate machine configuration updates:
talosctl apply-config
to apply configuration from the filetalosctl edit machineconfig
to launch an editor with existing node configuration, make changes and apply configuration backtalosctl patch machineconfig
to apply automated machine configuration via JSON patch
Each of these commands can operate in one of four modes:
- apply change in automatic mode (default): reboot if the change can’t be applied without a reboot, otherwise apply the change immediately
- apply change with a reboot (
--mode=reboot
): update configuration, reboot Talos node to apply configuration change - apply change immediately (
--mode=no-reboot
flag): change is applied immediately without a reboot, fails if the change contains any fields that can not be updated without a reboot - apply change on next reboot (
--mode=staged
): change is staged to be applied after a reboot, but node is not rebooted - apply change in the interactive mode (
--mode=interactive
; only fortalosctl apply-config
): launches TUI based interactive installer
Note: applying change on next reboot (
--mode=staged
) doesn’t modify current node configuration, so next call totalosctl edit machineconfig --mode=staged
will not see changes
Additionally, there is also talosctl get machineconfig
, which retrieves the current node configuration API resource and contains the machine configuration in the .spec
field.
It can be used to modify the configuration locally before being applied to the node.
The list of config changes allowed to be applied immediately in Talos v1.4.8:
.debug
.cluster
.machine.time
.machine.certCANs
.machine.install
(configuration is only applied during install/upgrade).machine.network
.machine.nodeLabels
.machine.sysfs
.machine.sysctls
.machine.logging
.machine.controlplane
.machine.kubelet
.machine.pods
.machine.kernel
.machine.registries
(CRI containerd plugin will not pick up the registry authentication settings without a reboot).machine.features.kubernetesTalosAPIAccess
talosctl apply-config
This command is traditionally used to submit initial machine configuration generated by talosctl gen config
to the node.
It can also be used to apply configuration to running nodes.
The initial YAML for this is typically obtained using talosctl get machineconfig -o yaml | yq eval .spec >machs.yaml
.
(We must use yq
because for historical reasons, get
returns the configuration as a full resource, while apply-config
only accepts the raw machine config directly.)
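As a concrete sketch of that round trip (assuming yq v4, as in the command above):
talosctl -n <IP> get machineconfig -o yaml | yq eval .spec > machineconfig.yaml
# edit machineconfig.yaml as needed, then re-apply it
talosctl -n <IP> apply-config -f machineconfig.yaml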
Example:
talosctl -n <IP> apply-config -f config.yaml
Command apply-config
can also be invoked as apply machineconfig
:
talosctl -n <IP> apply machineconfig -f config.yaml
Applying machine configuration immediately (without a reboot):
talosctl -n IP apply machineconfig -f config.yaml --mode=no-reboot
Starting the interactive installer:
talosctl -n IP apply machineconfig --mode=interactive
Note: when a Talos node is running in the maintenance mode it’s necessary to provide
--insecure (-i)
flag to connect to the API and apply the config.
talosctl edit machineconfig
The talosctl edit command loads the current machine configuration from the node and launches the configured editor to modify the config.
If the config hasn't been changed in the editor (or if the updated config is empty), the update is not applied.
Note: Talos uses environment variables
TALOS_EDITOR
,EDITOR
to pick up the editor preference. If environment variables are missing,vi
editor is used by default.
Example:
talosctl -n <IP> edit machineconfig
Configuration can be edited for multiple nodes if multiple IP addresses are specified:
talosctl -n <IP1>,<IP2>,... edit machineconfig
Applying machine configuration change immediately (without a reboot):
talosctl -n <IP> edit machineconfig --mode=no-reboot
talosctl patch machineconfig
The talosctl patch command works similarly to talosctl edit: it loads the current machine configuration, but instead of launching the configured editor it applies a set of JSON patches to the configuration and writes the result back to the node.
Example, updating kubelet version (in auto mode):
$ talosctl -n <IP> patch machineconfig -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.27.4"}]'
patched mc at the node <IP>
Updating kube-apiserver version in immediate mode (without a reboot):
$ talosctl -n <IP> patch machineconfig --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "registry.k8s.io/kube-apiserver:v1.27.4"}]'
patched mc at the node <IP>
A patch might be applied to multiple nodes when multiple IPs are specified:
talosctl -n <IP1>,<IP2>,... patch machineconfig -p '[{...}]'
Patches can also be sourced from files using @file
syntax:
talosctl -n <IP> patch machineconfig -p @kubelet-patch.json -p @manifest-patch.json
It might be easier to store patches in YAML format vs. the default JSON format. Talos can detect file format automatically:
# kubelet-patch.yaml
- op: replace
path: /machine/kubelet/image
value: ghcr.io/siderolabs/kubelet:v1.27.4
talosctl -n <IP> patch machineconfig -p @kubelet-patch.yaml
Recovering from Node Boot Failures
If a Talos node fails to boot because of wrong configuration (for example, control plane endpoint is incorrect), configuration can be updated to fix the issue.
2.6 - Logging
Viewing logs
Kernel messages can be retrieved with talosctl dmesg
command:
$ talosctl -n 172.20.1.2 dmesg
172.20.1.2: kern: info: [2021-11-10T10:09:37.662764956Z]: Command line: init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 console=ttyS0 reboot=k panic=1 talos.shutdown=halt talos.platform=metal talos.config=http://172.20.1.1:40101/config.yaml
[...]
Service logs can be retrieved with talosctl logs
command:
$ talosctl -n 172.20.1.2 services
NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT
172.20.1.2 apid Running OK 19m27s ago Health check successful
172.20.1.2 containerd Running OK 19m29s ago Health check successful
172.20.1.2 cri Running OK 19m27s ago Health check successful
172.20.1.2 etcd Running OK 19m22s ago Health check successful
172.20.1.2 kubelet Running OK 19m20s ago Health check successful
172.20.1.2 machined Running ? 19m30s ago Service started as goroutine
172.20.1.2 trustd Running OK 19m27s ago Health check successful
172.20.1.2 udevd Running OK 19m28s ago Health check successful
$ talosctl -n 172.20.1.2 logs machined
172.20.1.2: [talos] task setupLogger (1/1): done, 106.109µs
172.20.1.2: [talos] phase logger (1/7): done, 564.476µs
[...]
Container logs for Kubernetes pods can be retrieved with talosctl logs -k
command:
$ talosctl -n 172.20.1.2 containers -k
NODE NAMESPACE ID IMAGE PID STATUS
172.20.1.2 k8s.io kube-system/kube-flannel-dk6d5 registry.k8s.io/pause:3.6 1329 SANDBOX_READY
172.20.1.2 k8s.io └─ kube-system/kube-flannel-dk6d5:install-cni:f1d4cf68feb9 ghcr.io/siderolabs/install-cni:v0.7.0-alpha.0-1-g2bb2efc 0 CONTAINER_EXITED
172.20.1.2 k8s.io └─ kube-system/kube-flannel-dk6d5:install-config:bc39fec3cbac quay.io/coreos/flannel:v0.13.0 0 CONTAINER_EXITED
172.20.1.2 k8s.io └─ kube-system/kube-flannel-dk6d5:kube-flannel:5c3989353b98 quay.io/coreos/flannel:v0.13.0 1610 CONTAINER_RUNNING
172.20.1.2 k8s.io kube-system/kube-proxy-gfkqj registry.k8s.io/pause:3.5 1311 SANDBOX_READY
172.20.1.2 k8s.io └─ kube-system/kube-proxy-gfkqj:kube-proxy:ad5e8ddc7e7f registry.k8s.io/kube-proxy:v1.27.4 1379 CONTAINER_RUNNING
$ talosctl -n 172.20.1.2 logs -k kube-system/kube-proxy-gfkqj:kube-proxy:ad5e8ddc7e7f
172.20.1.2: 2021-11-30T19:13:20.567825192Z stderr F I1130 19:13:20.567737 1 server_others.go:138] "Detected node IP" address="172.20.0.3"
172.20.1.2: 2021-11-30T19:13:20.599684397Z stderr F I1130 19:13:20.599613 1 server_others.go:206] "Using iptables Proxier"
[...]
Sending logs
Service logs
You can enable sending logs in the machine configuration:
machine:
logging:
destinations:
- endpoint: "udp://127.0.0.1:12345/"
format: "json_lines"
- endpoint: "tcp://host:5044/"
format: "json_lines"
Several destinations can be specified.
Supported protocols are UDP and TCP.
The only currently supported format is json_lines
:
{
"msg": "[talos] apply config request: immediate true, on reboot false",
"talos-level": "info",
"talos-service": "machined",
"talos-time": "2021-11-10T10:48:49.294858021Z"
}
Messages are newline-separated when sent over TCP.
Over UDP messages are sent with one message per packet.
msg
, talos-level
, talos-service
, and talos-time
fields are always present; there may be additional fields.
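To quickly inspect what a node is sending, you can point a destination at your workstation and run a simple UDP listener there; this is only a debugging sketch, and netcat flags differ between implementations:
# listen for JSON log lines on UDP port 12345 (OpenBSD netcat syntax)
nc -u -l 12345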
Kernel logs
Kernel log delivery can be enabled with the talos.logging.kernel kernel command line argument, which can be specified in .machine.install.extraKernelArgs:
machine:
install:
extraKernelArgs:
- talos.logging.kernel=tcp://host:5044/
The kernel log destination is specified in the same way as the service log endpoint.
The only supported format is json_lines.
Sample message:
{
"clock":6252819, // time relative to the kernel boot time
"facility":"user",
"msg":"[talos] task startAllServices (1/1): waiting for 6 services\n",
"priority":"warning",
"seq":711,
"talos-level":"warn", // Talos-translated `priority` into common logging level
"talos-time":"2021-11-26T16:53:21.3258698Z" // Talos-translated `clock` using current time
}
extraKernelArgs in the machine configuration are only applied on Talos upgrades, not when the config is applied. (Upgrading to the same Talos version is fine.)
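For example, assuming the snippet above is saved as kernel-logging-patch.yaml and that your talosctl version accepts strategic merge patches, a sketch of applying it and upgrading to the same version so the new kernel arguments take effect might look like:
talosctl -n <node IP> patch mc --patch @kernel-logging-patch.yaml
talosctl -n <node IP> upgrade --image=ghcr.io/siderolabs/installer:v1.4.8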
Filebeat example
One way to forward logs to other log collection services is to send them to a Filebeat running in the cluster itself (in the host network), which takes care of forwarding them to other endpoints (and of any necessary transformations).
If Elastic Cloud on Kubernetes is being used, the following Beat (custom resource) configuration might be helpful:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: talos
spec:
type: filebeat
version: 7.15.1
elasticsearchRef:
name: talos
config:
filebeat.inputs:
- type: "udp"
host: "127.0.0.1:12345"
processors:
- decode_json_fields:
fields: ["message"]
target: ""
- timestamp:
field: "talos-time"
layouts:
- "2006-01-02T15:04:05.999999999Z07:00"
- drop_fields:
fields: ["message", "talos-time"]
- rename:
fields:
- from: "msg"
to: "message"
daemonSet:
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
podTemplate:
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
containers:
- name: filebeat
ports:
- protocol: UDP
containerPort: 12345
hostPort: 12345
The input configuration ensures that messages and timestamps are extracted properly. Refer to the Filebeat documentation on how to forward logs to other outputs.
Also note the hostNetwork: true in the daemonSet configuration.
This ensures Filebeat uses the host network and listens on 127.0.0.1:12345 (UDP) on every machine, which can then be specified as a logging endpoint in the machine configuration.
Fluent-bit example
First, we’ll create a values file for the fluent-bit Helm chart.
# fluentd-bit.yaml
podAnnotations:
fluentbit.io/exclude: 'true'
extraPorts:
- port: 12345
containerPort: 12345
protocol: TCP
name: talos
config:
service: |
[SERVICE]
Flush 5
Daemon Off
Log_Level warn
Parsers_File custom_parsers.conf
inputs: |
[INPUT]
Name tcp
Listen 0.0.0.0
Port 12345
Format json
Tag talos.*
[INPUT]
Name tail
Alias kubernetes
Path /var/log/containers/*.log
Parser containerd
Tag kubernetes.*
[INPUT]
Name tail
Alias audit
Path /var/log/audit/kube/*.log
Parser audit
Tag audit.*
filters: |
[FILTER]
Name kubernetes
Alias kubernetes
Match kubernetes.*
Kube_Tag_Prefix kubernetes.var.log.containers.
Use_Kubelet Off
Merge_Log On
Merge_Log_Trim On
Keep_Log Off
K8S-Logging.Parser Off
K8S-Logging.Exclude On
Annotations Off
Labels On
[FILTER]
Name modify
Match kubernetes.*
Add source kubernetes
Remove logtag
customParsers: |
[PARSER]
Name audit
Format json
Time_Key requestReceivedTimestamp
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
[PARSER]
Name containerd
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
outputs: |
[OUTPUT]
Name stdout
Alias stdout
Match *
Format json_lines
# If you wish to ship directly to Loki from Fluentbit,
# Uncomment the following output, updating the Host with your Loki DNS/IP info as necessary.
# [OUTPUT]
# Name loki
# Match *
# Host loki.loki.svc
# Port 3100
# Labels job=fluentbit
# Auto_Kubernetes_Labels on
daemonSetVolumes:
- name: varlog
hostPath:
path: /var/log
daemonSetVolumeMounts:
- name: varlog
mountPath: /var/log
tolerations:
- operator: Exists
effect: NoSchedule
Next, we will add the Helm repo for Fluent Bit and deploy it to the cluster.
helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade -i --namespace=kube-system -f fluentd-bit.yaml fluent-bit fluent/fluent-bit
Now we need to find the service IP.
$ kubectl -n kube-system get svc -l app.kubernetes.io/name=fluent-bit
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fluent-bit ClusterIP 10.200.0.138 <none> 2020/TCP,5170/TCP 108m
Finally, we will change the Talos log destination with the talosctl edit mc command.
machine:
logging:
destinations:
- endpoint: "tcp://10.200.0.138:5170"
format: "json_lines"
This example configuration was well tested with Cilium CNI, and it should work with iptables/ipvs based CNI plugins too.
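To confirm logs are arriving, you can tail the fluent-bit pods; with the stdout output configured above, every received record is echoed as a JSON line. The label selector below matches the one used earlier for the service:
kubectl -n kube-system logs -l app.kubernetes.io/name=fluent-bit -f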
Vector example
Vector is a lightweight observability pipeline ideal for a Kubernetes environment. It can ingest (source) logs from multiple sources, perform remapping on the logs (transform), and forward the resulting pipeline to multiple destinations (sinks). As it is an end-to-end platform, it can be run as a single-deployment ‘aggregator’ as well as an ‘agent’ running on each node.
As Talos can be set up as above to send logs to a destination, we can run Vector as an aggregator and forward both kernel and service logs to a UDP socket in-cluster.
Below is an excerpt of a source/sink setup for Talos, with a ‘sink’ destination of an in-cluster Grafana Loki log aggregation service. As Loki can create labels from the log input, we have set up the Loki sink to create labels based on the host IP, service, and facility of the inbound logs.
Note that a method of exposing the Vector service will be required, which may vary depending on your setup; a LoadBalancer is a good option.
role: "Stateless-Aggregator"
# Sources
sources:
talos_kernel_logs:
address: 0.0.0.0:6050
type: socket
mode: udp
max_length: 102400
decoding:
codec: json
host_key: __host
talos_service_logs:
address: 0.0.0.0:6051
type: socket
mode: udp
max_length: 102400
decoding:
codec: json
host_key: __host
# Sinks
sinks:
talos_kernel:
type: loki
inputs:
- talos_kernel_logs_xform
endpoint: http://loki.system-monitoring:3100
encoding:
codec: json
except_fields:
- __host
batch:
max_bytes: 1048576
out_of_order_action: rewrite_timestamp
labels:
hostname: >-
{{ __host }}
facility: >-
{{ facility }}
talos_service:
type: loki
inputs:
- talos_service_logs_xform
endpoint: http://loki.system-monitoring:3100
encoding:
codec: json
except_fields:
- __host
batch:
max_bytes: 400000
out_of_order_action: rewrite_timestamp
labels:
hostname: >-
{{ __host }}
service: >-
{{`{{ "talos-service" }}`}}
2.7 - Managing Talos PKI
Generating New Client Configuration
Using Controlplane Node
If you have a valid (not expired) talosconfig with the os:admin role, a new client configuration file can be generated with talosctl config new against any controlplane node:
talosctl -n CP1 config new talosconfig-reader --roles os:reader --crt-ttl 24h
A specific role and certificate lifetime can be specified.
From Secrets Bundle
If a secrets bundle (secrets.yaml from talosctl gen secrets) was saved while generating the machine configuration:
talosctl gen config --with-secrets secrets.yaml --output-types talosconfig -o talosconfig <cluster-name> https://<cluster-endpoint>
Note: the <cluster-name> and <cluster-endpoint> arguments don’t matter, as they are not used for the talosconfig.
From Control Plane Machine Configuration
In order to create a new key pair for client configuration, you will need the root Talos API CA.
The base64 encoded CA can be found in the control plane node’s configuration file.
Save the CA public key and the CA private key as ca.crt and ca.key respectively:
yq eval .machine.ca.crt controlplane.yaml | base64 -d > ca.crt
yq eval .machine.ca.key controlplane.yaml | base64 -d > ca.key
Now, run the following commands to generate a certificate:
talosctl gen key --name admin
talosctl gen csr --key admin.key --ip 127.0.0.1
talosctl gen crt --ca ca --csr admin.csr --name admin
Put the base64-encoded files in the respective locations in the talosconfig:
context: mycluster
contexts:
mycluster:
endpoints:
- CP1
- CP2
ca: <base64-encoded ca.crt>
crt: <base64-encoded admin.crt>
key: <base64-encoded admin.key>
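The base64-encoded values can be produced from the files generated above; a small sketch, assuming GNU coreutils base64 (on macOS, use base64 without -w0):
base64 -w0 ca.crt     # value for the ca field
base64 -w0 admin.crt  # value for the crt field
base64 -w0 admin.key  # value for the key field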
Renewing an Expired Administrator Certificate
By default, the admin talosconfig certificate is valid for 365 days, while the cluster CAs are valid for 10 years.
To prevent the admin talosconfig from expiring, renew the client config before expiration using the talosctl config new command described above.
If the talosconfig is expired or lost, you can still generate a new one using either the secrets.yaml secrets bundle or the control plane node’s configuration file, following the methods described above.
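To check when the current client certificate expires, you can decode it from the talosconfig; this sketch assumes yq and openssl are installed and that the context is named mycluster:
yq eval '.contexts.mycluster.crt' talosconfig | base64 -d | openssl x509 -noout -enddate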
2.8 - NVIDIA Fabric Manager
NVIDIA GPUs that have NVLink support (e.g. A100) need the nvidia-fabricmanager system extension enabled in addition to the NVIDIA drivers. For more information on Fabric Manager, refer to https://docs.nvidia.com/datacenter/tesla/fabric-manager-user-guide/index.html
The published versions of the NVIDIA fabricmanager system extension are available here.
The nvidia-fabricmanager extension version has to match the NVIDIA driver version in use.
Upgrading Talos and enabling the NVIDIA fabricmanager system extension
In addition to the patch defined in the NVIDIA drivers guide, we need to add the nvidia-fabricmanager system extension to the patch YAML gpu-worker-patch.yaml:
- op: add
path: /machine/install/extensions
value:
- image: ghcr.io/siderolabs/nvidia-open-gpu-kernel-modules:530.41.03-v1.4.8
- image: ghcr.io/siderolabs/nvidia-container-toolkit:530.41.03-v1.12.1
- image: ghcr.io/siderolabs/nvidia-fabricmanager:525.85.12
- op: add
path: /machine/kernel
value:
modules:
- name: nvidia
- name: nvidia_uvm
- name: nvidia_drm
- name: nvidia_modeset
- op: add
path: /machine/sysctls
value:
net.core.bpf_jit_harden: 1
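As with the driver-only patch, the extension is activated by applying the patch and upgrading to the same Talos version (the same commands used in the NVIDIA drivers guide):
talosctl patch mc --patch @gpu-worker-patch.yaml
talosctl upgrade --image=ghcr.io/siderolabs/installer:v1.4.8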
2.9 - NVIDIA GPU (OSS drivers)
Enabling NVIDIA GPU support on Talos is bound by the NVIDIA EULA. The Talos-published NVIDIA OSS drivers are bound to a specific Talos release, and the extension versions also need to be updated when upgrading Talos.
The published versions of the NVIDIA system extensions can be found here.
Upgrading Talos and enabling the NVIDIA modules and the system extension
Make sure to use talosctl version v1.4.8 or later.
First create a patch YAML gpu-worker-patch.yaml to update the machine config, similar to below:
- op: add
path: /machine/install/extensions
value:
- image: ghcr.io/siderolabs/nvidia-open-gpu-kernel-modules:530.41.03-v1.4.8
- image: ghcr.io/siderolabs/nvidia-container-toolkit:530.41.03-v1.12.1
- op: add
path: /machine/kernel
value:
modules:
- name: nvidia
- name: nvidia_uvm
- name: nvidia_drm
- name: nvidia_modeset
- op: add
path: /machine/sysctls
value:
net.core.bpf_jit_harden: 1
Update the driver version and Talos release in the above patch YAML from the published versions if there is a newer one available. Make sure the driver version matches for both the nvidia-open-gpu-kernel-modules and nvidia-container-toolkit extensions. The nvidia-open-gpu-kernel-modules extension is versioned as <nvidia-driver-version>-<talos-release-version> and the nvidia-container-toolkit extension is versioned as <nvidia-driver-version>-<nvidia-container-toolkit-version>.
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @gpu-worker-patch.yaml
Now we can proceed to upgrading Talos to the same version to enable the system extension:
talosctl upgrade --image=ghcr.io/siderolabs/installer:v1.4.8
Once the node reboots, the NVIDIA modules should be loaded and the system extension should be installed.
This can be confirmed by running:
talosctl read /proc/modules
which should produce an output similar to below:
nvidia_uvm 1146880 - - Live 0xffffffffc2733000 (PO)
nvidia_drm 69632 - - Live 0xffffffffc2721000 (PO)
nvidia_modeset 1142784 - - Live 0xffffffffc25ea000 (PO)
nvidia 39047168 - - Live 0xffffffffc00ac000 (PO)
talosctl get extensions
which should produce an output similar to below:
NODE NAMESPACE TYPE ID VERSION NAME VERSION
172.31.41.27 runtime ExtensionStatus 000.ghcr.io-siderolabs-nvidia-container-toolkit-515.65.01-v1.10.0 1 nvidia-container-toolkit 515.65.01-v1.10.0
172.31.41.27 runtime ExtensionStatus 000.ghcr.io-siderolabs-nvidia-open-gpu-kernel-modules-515.65.01-v1.2.0 1 nvidia-open-gpu-kernel-modules 515.65.01-v1.2.0
talosctl read /proc/driver/nvidia/version
which should produce an output similar to below:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 515.65.01 Wed Mar 16 11:24:05 UTC 2022
GCC version: gcc version 12.2.0 (GCC)
Deploying NVIDIA device plugin
First, we need to create the RuntimeClass.
Apply the following manifest to create a runtime class that uses the extension:
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: nvidia
handler: nvidia
Install the NVIDIA device plugin:
helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm install nvidia-device-plugin nvdp/nvidia-device-plugin --version=0.13.0 --set=runtimeClassName=nvidia
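Once the device plugin pods are running, the GPUs should be advertised as allocatable resources on the node. A quick check, assuming the standard nvidia.com/gpu resource name exposed by the device plugin:
kubectl describe node <node-name> | grep -A5 Allocatable
# look for a line similar to: nvidia.com/gpu: 1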
(Optional) Setting the default runtime class as nvidia
Do note that this will set the default runtime class to nvidia for all pods scheduled on the node.
Create a patch YAML nvidia-default-runtimeclass.yaml to update the machine config, similar to below:
- op: add
path: /machine/files
value:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "nvidia"
path: /etc/cri/conf.d/20-customization.part
op: create
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @nvidia-default-runtimeclass.yaml
Testing the runtime class
Note the spec.runtimeClassName being explicitly set to nvidia in the pod spec.
Run the following command to test the runtime class:
kubectl run \
nvidia-test \
--restart=Never \
-ti --rm \
--image nvcr.io/nvidia/cuda:12.1.0-base-ubuntu22.04 \
--overrides '{"spec": {"runtimeClassName": "nvidia"}}' \
nvidia-smi
2.10 - NVIDIA GPU (Proprietary drivers)
Enabling NVIDIA GPU support on Talos is bound by NVIDIA EULA.
The steps to enable NVIDIA support in Talos v1.4.8 and later differ from previous versions:
- For versions prior to v1.4.8, the steps require that the user build and maintain their own Talos installer image. After the Prerequisites, jump to Building the installer image (Talos prior to v1.4.8), and after Upgrading Talos and enabling the NVIDIA modules and the system extension (Talos prior to v1.4.8), continue from Verifying the NVIDIA modules and the system extension.
- For v1.4.8 and later versions, building a custom Talos installer image is no longer required; the new, preferred way to enable NVIDIA support is via a system extension.
Prerequisites
This guide assumes the user has access to a container registry with push permissions, Docker installed on the build machine, and that the Talos host has pull access to the container registry.
Set the local registry, username and version environment variables:
export USERNAME=<username>
export REGISTRY=<registry>
export TAG=v1.4.8-nvidia
For example:
export USERNAME=talos-user
export REGISTRY=ghcr.io
The examples below will use the sample variables set above. Modify accordingly for your environment.
Building the NVIDIA extensions
Instead of building the extensions yourself, you can use the extensions published by Sidero Labs in the pkgs repo here and here.
Start by cloning the release-1.5 branch of the extensions repository.
git clone --depth=1 --branch=release-1.5 https://github.com/siderolabs/extensions.git
Look up the version of pkgs used for the particular Talos version here.
Replace v1.4.8 with the Talos version you are using.
Now run the following command to build and push the custom NVIDIA extension.
make nonfree-kmod-nvidia PKGS=<pkgs-version-looked-up-above> PLATFORM=linux/amd64 PUSH=true
Replace the platform with linux/arm64 if building for ARM64. Make sure to use talosctl version v1.4.8 or later.
Upgrading Talos and enabling the NVIDIA modules and the system extension
First create a patch yaml gpu-worker-patch.yaml
to update the machine config similar to below:
- op: add
path: /machine/install/extensions
value:
- image: ghcr.io/siderolabs/nonfree-kmod-nvidia:530.41.03-v1.4.8
- image: ghcr.io/siderolabs/nvidia-container-toolkit:530.41.03-v1.12.1
- op: add
path: /machine/kernel
value:
modules:
- name: nvidia
- name: nvidia_uvm
- name: nvidia_drm
- name: nvidia_modeset
- op: add
path: /machine/sysctls
value:
net.core.bpf_jit_harden: 1
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @gpu-worker-patch.yaml
Now we can proceed to upgrading Talos to the same version to enable the system extension:
talosctl upgrade --image=ghcr.io/siderolabs/installer:v1.4.8
Verifying the NVIDIA modules and the system extension
Once the node reboots, the NVIDIA modules should be loaded and the system extension should be installed.
This can be confirmed by running:
talosctl read /proc/modules
which should produce an output similar to below:
nvidia_uvm 1146880 - - Live 0xffffffffc2733000 (PO)
nvidia_drm 69632 - - Live 0xffffffffc2721000 (PO)
nvidia_modeset 1142784 - - Live 0xffffffffc25ea000 (PO)
nvidia 39047168 - - Live 0xffffffffc00ac000 (PO)
talosctl get extensions
which should produce an output similar to below:
NODE NAMESPACE TYPE ID VERSION NAME VERSION
172.31.41.27 runtime ExtensionStatus 000.ghcr.io-frezbo-nvidia-container-toolkit-510.60.02-v1.9.0 1 nvidia-container-toolkit 510.60.02-v1.9.0
talosctl read /proc/driver/nvidia/version
which should produce an output similar to below:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 510.60.02 Wed Mar 16 11:24:05 UTC 2022
GCC version: gcc version 11.2.0 (GCC)
Deploying NVIDIA device plugin
First, we need to create the RuntimeClass.
Apply the following manifest to create a runtime class that uses the extension:
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: nvidia
handler: nvidia
Install the NVIDIA device plugin:
helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm install nvidia-device-plugin nvdp/nvidia-device-plugin --version=0.13.0 --set=runtimeClassName=nvidia
(Optional) Setting the default runtime class as nvidia
Do note that this will set the default runtime class to nvidia for all pods scheduled on the node.
Create a patch YAML nvidia-default-runtimeclass.yaml to update the machine config, similar to below:
- op: add
path: /machine/files
value:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "nvidia"
path: /etc/cri/conf.d/20-customization.part
op: create
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @nvidia-default-runtimeclass.yaml
Testing the runtime class
Note the spec.runtimeClassName being explicitly set to nvidia in the pod spec.
Run the following command to test the runtime class:
kubectl run \
nvidia-test \
--restart=Never \
-ti --rm \
--image nvcr.io/nvidia/cuda:12.1.0-base-ubuntu22.04 \
--overrides '{"spec": {"runtimeClassName": "nvidia"}}' \
nvidia-smi
Building the installer image (Talos prior to v1.4.8)
Start by cloning the pkgs repository.
Now run the following command to build and push a custom Talos kernel image and the NVIDIA image, with the NVIDIA kernel modules signed by the kernel built along with it.
make kernel nonfree-kmod-nvidia PLATFORM=linux/amd64 PUSH=true
Replace the platform with linux/arm64 if building for ARM64.
Now we need to create a custom Talos installer image.
Start by creating a Dockerfile
with the following content:
FROM scratch as customization
COPY --from=ghcr.io/talos-user/nonfree-kmod-nvidia:v1.4.8-nvidia /lib/modules /lib/modules
FROM ghcr.io/siderolabs/installer:v1.4.8
COPY --from=ghcr.io/talos-user/kernel:v1.4.8-nvidia /boot/vmlinuz /usr/install/${TARGETARCH}/vmlinuz
Now build the image and push it to the registry.
DOCKER_BUILDKIT=0 docker build --squash --build-arg RM="/lib/modules" -t ghcr.io/talos-user/installer:v1.4.8-nvidia .
docker push ghcr.io/talos-user/installer:v1.4.8-nvidia
Note: BuildKit has a bug #816; to disable it, use DOCKER_BUILDKIT=0. Replace the platform with linux/arm64 if building for ARM64.
Upgrading Talos and enabling the NVIDIA modules and the system extension (Talos prior to v1.4.8)
Make sure to use
talosctl
version v1.4.8 or later
First create a patch yaml gpu-worker-patch.yaml
to update the machine config similar to below:
- op: add
path: /machine/install/extensions
value:
- image: ghcr.io/siderolabs/nvidia-container-toolkit:530.41.03-v1.12.1
- op: add
path: /machine/kernel
value:
modules:
- name: nvidia
- name: nvidia_uvm
- name: nvidia_drm
- name: nvidia_modeset
- op: add
path: /machine/sysctls
value:
net.core.bpf_jit_harden: 1
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @gpu-worker-patch.yaml
Now we can proceed to upgrading Talos with the installer built previously:
talosctl upgrade --image=ghcr.io/talos-user/installer:v1.4.8-nvidia
2.11 - Pull Through Image Cache
In this guide we will create a set of local caching Docker registry proxies to minimize local cluster startup time.
When running Talos locally, pulling images from container registries might take a significant amount of time. We spin up local caching pass-through registries to cache images and configure a local Talos cluster to use those proxies. A similar approach might be used to run Talos in production in air-gapped environments. It can be also used to verify that all the images are available in local registries.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Requirements
The following are requirements for creating the set of caching proxies:
Launch the Caching Docker Registry Proxies
Talos pulls from docker.io
, registry.k8s.io
, gcr.io
, and ghcr.io
by default.
If your configuration is different, you might need to modify the commands below:
docker run -d -p 5000:5000 \
-e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
--restart always \
--name registry-docker.io registry:2
docker run -d -p 5001:5000 \
-e REGISTRY_PROXY_REMOTEURL=https://registry.k8s.io \
--restart always \
--name registry-registry.k8s.io registry:2
docker run -d -p 5003:5000 \
-e REGISTRY_PROXY_REMOTEURL=https://gcr.io \
--restart always \
--name registry-gcr.io registry:2
docker run -d -p 5004:5000 \
-e REGISTRY_PROXY_REMOTEURL=https://ghcr.io \
--restart always \
--name registry-ghcr.io registry:2
Note: Proxies are started as Docker containers, and they’re automatically configured to start with the Docker daemon.
As a registry container can only handle a single upstream Docker registry, we launch a container per upstream, each on its own host port (5000, 5001, 5003, and 5004).
Using Caching Registries with QEMU Local Cluster
With a QEMU local cluster, a bridge interface is created on the host. As registry containers expose their ports on the host, we can use bridge IP to direct proxy requests.
sudo talosctl cluster create --provisioner qemu \
--registry-mirror docker.io=http://10.5.0.1:5000 \
--registry-mirror registry.k8s.io=http://10.5.0.1:5001 \
--registry-mirror gcr.io=http://10.5.0.1:5003 \
--registry-mirror ghcr.io=http://10.5.0.1:5004
The Talos local cluster should now start pulling via the caching registries.
This can be verified via the registry logs, e.g. docker logs -f registry-docker.io.
The first time the cluster boots, images are pulled and cached, so the next cluster boot should be much faster.
Note: 10.5.0.1 is the bridge IP with the default network (10.5.0.0/24); if using a custom --cidr, the value should be adjusted accordingly.
Using Caching Registries with docker Local Cluster
With a Docker local cluster we can use the Docker bridge IP; the default value for that IP is 172.17.0.1.
On Linux, the Docker bridge address can be inspected with ip addr show docker0.
talosctl cluster create --provisioner docker \
--registry-mirror docker.io=http://172.17.0.1:5000 \
--registry-mirror registry.k8s.io=http://172.17.0.1:5001 \
--registry-mirror gcr.io=http://172.17.0.1:5003 \
--registry-mirror ghcr.io=http://172.17.0.1:5004
Machine Configuration
The caching registries can be configured via machine configuration patch, equivalent to the command line flags above:
machine:
registries:
mirrors:
docker.io:
endpoints:
- http://10.5.0.1:5000
gcr.io:
endpoints:
- http://10.5.0.1:5003
ghcr.io:
endpoints:
- http://10.5.0.1:5004
registry.k8s.io:
endpoints:
- http://10.5.0.1:5001
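If you prefer patching an existing cluster rather than passing flags at creation time, the same mirror configuration can be applied as a machine config patch; the file name registry-mirrors.yaml below is illustrative:
talosctl -n <node IP> patch mc --patch @registry-mirrors.yaml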
Cleaning Up
To cleanup, run:
docker rm -f registry-docker.io
docker rm -f registry-registry.k8s.io
docker rm -f registry-gcr.io
docker rm -f registry-ghcr.io
Note: Removing docker registry containers also removes the image cache. So if you plan to use caching registries, keep the containers running.
Using Harbor as a Caching Registry
Harbor is an open source container registry that can be used as a caching proxy. Harbor supports configuring multiple upstream registries, so it can be used to cache multiple registries at once behind a single endpoint.
As Harbor puts the registry name in the pull image path, we need to set overridePath: true to prevent Talos and containerd from appending /v2 to the path.
machine:
registries:
mirrors:
docker.io:
endpoints:
- http://harbor/v2/proxy-docker.io
overridePath: true
ghcr.io:
endpoints:
- http://harbor/v2/proxy-ghcr.io
overridePath: true
gcr.io:
endpoints:
- http://harbor/v2/proxy-gcr.io
overridePath: true
registry.k8s.io:
endpoints:
- http://harbor/v2/proxy-registry.k8s.io
overridePath: true
The Harbor external endpoint (http://harbor in this example) can be configured with authentication or custom TLS:
machine:
registries:
config:
harbor:
auth:
username: admin
password: password
2.12 - Role-based access control (RBAC)
Talos v0.11 introduced initial support for role-based access control (RBAC). This guide will explain what that is and how to enable it without losing access to the cluster.
RBAC in Talos
Talos uses certificates to authorize users. The certificate subject’s organization field is used to encode user roles. There is a set of predefined roles that allow access to different API methods:
- os:admin grants access to all methods;
- os:operator grants everything the os:reader role does, plus additional methods: rebooting, shutting down, etcd backup, etcd alarm management, and so on;
- os:reader grants access to “safe” methods (for example, that includes the ability to list files, but does not include the ability to read file contents);
- os:etcd:backup grants access to the /machine.MachineService/EtcdSnapshot method.
Roles in the current talosconfig can be checked with the following command:
$ talosctl config info
[...]
Roles: os:admin
[...]
RBAC is enabled by default in new clusters created with talosctl v0.11+ and disabled otherwise.
Enabling RBAC
First, both the Talos cluster and the talosctl tool should be upgraded.
Then the talosctl config new command should be used to generate a new client configuration with the os:admin role.
Additional configurations and certificates for different roles can be generated by passing the --roles flag:
talosctl config new --roles=os:reader reader
That command will create a new client configuration file reader with a new certificate carrying the os:reader role.
After that, RBAC should be enabled in the machine configuration:
machine:
features:
rbac: true
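Once RBAC is enabled, you can verify that the restricted configuration behaves as expected. A sketch, assuming the reader file generated above and a node reachable at <node IP>:
# read-only calls should succeed
talosctl --talosconfig=reader -n <node IP> services
# privileged calls (for example, reboot) should be denied with a permissions error
talosctl --talosconfig=reader -n <node IP> reboot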
2.13 - System Extensions
System extensions allow extending the Talos root filesystem, which enables a variety of features, such as including custom container runtimes, loading additional firmware, etc.
System extensions are only activated during the installation or upgrade of Talos Linux. With system extensions installed, the Talos root filesystem is still immutable and read-only.
Configuration
System extensions are configured in the .machine.install section:
machine:
install:
extensions:
- image: ghcr.io/siderolabs/gvisor:33f613e
During the initial install (e.g. when PXE booting or booting from an ISO), Talos will pull down the container images for the system extensions, validate them, and include them in the Talos initramfs image.
System extensions will be activated on boot and overlaid on top of the Talos root filesystem.
In order to update the system extensions for a running instance, update .machine.install.extensions and upgrade Talos.
(Note: upgrading to the same version of Talos is fine.)
Building a Talos Image with System Extensions
System extensions can be installed into the Talos disk image (e.g. AWS AMI or VMWare OVF) by running the following command to generate the image from the Talos source tree:
make image-metal IMAGER_SYSTEM_EXTENSIONS="ghcr.io/siderolabs/amd-ucode:20220411 ghcr.io/siderolabs/gvisor:20220405.0-v1.0.0-10-g82b41ad"
Authoring System Extensions
A Talos system extension is a container image with a specific folder structure.
System extensions can be built and managed using any tool that produces container images, e.g. docker build.
Sidero Labs maintains a repository of system extensions.
Resource Definitions
Use talosctl get extensions to get a list of system extensions:
$ talosctl get extensions
NODE NAMESPACE TYPE ID VERSION NAME VERSION
172.20.0.2 runtime ExtensionStatus 000.ghcr.io-talos-systems-gvisor-54b831d 1 gvisor 20220117.0-v1.0.0
172.20.0.2 runtime ExtensionStatus 001.ghcr.io-talos-systems-intel-ucode-54b831d 1 intel-ucode microcode-20210608-v1.0.0
Use YAML or JSON format to see additional details about the extension:
$ talosctl -n 172.20.0.2 get extensions 001.ghcr.io-talos-systems-intel-ucode-54b831d -o yaml
node: 172.20.0.2
metadata:
namespace: runtime
type: ExtensionStatuses.runtime.talos.dev
id: 001.ghcr.io-talos-systems-intel-ucode-54b831d
version: 1
owner: runtime.ExtensionStatusController
phase: running
created: 2022-02-10T18:25:04Z
updated: 2022-02-10T18:25:04Z
spec:
image: 001.ghcr.io-talos-systems-intel-ucode-54b831d.sqsh
metadata:
name: intel-ucode
version: microcode-20210608-v1.0.0
author: Spencer Smith
description: |
This system extension provides Intel microcode binaries.
compatibility:
talos:
version: '>= v1.0.0'
Example: gVisor
3 - How Tos
3.1 - How to enable workers on your control plane nodes
By default, Talos Linux taints control plane nodes so that workloads are not schedulable on them.
In order to allow workloads to run on the control plane nodes (useful for single node clusters, or non-production clusters), follow the procedure below.
Modify the MachineConfig for the controlplane nodes to add allowSchedulingOnControlPlanes: true:
cluster:
allowSchedulingOnControlPlanes: true
This may be done by editing the controlplane.yaml file before it is applied to the controlplane nodes, by talosctl edit machineconfig, or by patching the machine config, as shown in the sketch below.
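For example, a hedged sketch of applying such a patch to a running node (the patch file name cp-scheduling-patch.yaml is illustrative):
# cp-scheduling-patch.yaml
cluster:
  allowSchedulingOnControlPlanes: true
talosctl -n <controlplane IP> patch mc --patch @cp-scheduling-patch.yaml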
Note: if you edit or patch the machine config on a running control plane node to set allowSchedulingOnControlPlanes: true, it will be applied immediately, but will not have any effect until the next reboot. You may reboot the nodes via talosctl reboot.
You may also immediately make the control plane nodes schedulable by running the below:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Note that unless allowSchedulingOnControlPlanes: true is set in the machine config, the nodes will be tainted again on the next reboot.
3.2 - How to scale down a Talos cluster
To remove nodes from a Talos Linux cluster:
talosctl -n <IP.of.node.to.remove> reset
kubectl delete node <nodename>
The command talosctl reset will cordon and drain the node, leave etcd (if required), and then erase its disks and power down the system.
This command will also remove the node from registration with the discovery service, so it will no longer show up in talosctl get members.
It is still necessary to remove the node from Kubernetes, as noted above.
3.3 - How to scale up a Talos cluster
To add more nodes to a Talos Linux cluster, follow the same procedure as when initially creating the cluster:
- boot the new machines to install Talos Linux
- apply the worker.yaml or controlplane.yaml configuration files to the new machines
You need the controlplane.yaml and worker.yaml files that were created when you initially deployed your cluster.
These contain the certificates that enable new machines to join.
Once you have the IP address, you can then apply the correct configuration for each machine you are adding, either worker or controlplane.
talosctl apply-config --insecure \
--nodes [NODE IP] \
--file controlplane.yaml
The insecure flag is necessary because the PKI infrastructure has not yet been made available to the node.
You do not need to bootstrap the new node. Regardless of whether you are adding a control plane or worker node, it will now join the cluster in its role.
4 - Network
4.1 - Corporate Proxies
Appending the Certificate Authority of MITM Proxies
Put into each machine the PEM encoded certificate:
machine:
...
files:
- content: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
permissions: 0644
path: /etc/ssl/certs/ca-certificates
op: append
Configuring a Machine to Use the Proxy
To make use of a proxy:
machine:
env:
http_proxy: <http proxy>
https_proxy: <https proxy>
no_proxy: <no proxy>
Additionally, configure the DNS nameservers and NTP servers:
machine:
env:
...
time:
servers:
- <server 1>
- <server ...>
- <server n>
...
network:
nameservers:
- <ip 1>
- <ip ...>
- <ip n>
4.2 - Network Device Selector
Configuring Network Device Using Device Selector
deviceSelector is an alternative method of configuring a network device:
machine:
...
network:
interfaces:
- deviceSelector:
driver: virtio
hardwareAddr: "00:00:*"
address: 192.168.88.21
The selector has the following traits:
- qualifiers match a device by reading the hardware information in /sys/class/net/...
- qualifiers are applied using logical AND
- the machine.network.interfaces.deviceSelector option is mutually exclusive with machine.network.interfaces.interface
- if the selector matches multiple devices, it is invalid: the controller will fail and won’t create any devices for the malformed selector
The available hardware information used in the selector can be observed in the LinkStatus resource (works in maintenance mode):
# talosctl get links eth0 -o yaml
spec:
...
hardwareAddr: 4e:95:8e:8f:e4:47
busPath: 0000:06:00.0
driver: alx
pciID: 1969:E0B1
Using Device Selector for Bonding
Device selectors can be used to configure bonded interfaces:
machine:
...
network:
interfaces:
- interface: bond0
bond:
mode: balance-rr
deviceSelectors:
- hardwareAddr: '00:50:56:8e:8f:e4'
- hardwareAddr: '00:50:57:9c:2c:2d'
In this example, the bond0 interface will be created and bonded using the two devices with the specified hardware addresses.
Use Case
The machine.network.interfaces.interface name is generated by the Linux kernel and can change after a reboot.
Device names can change when the system has several interfaces of the same kind, e.g. eth0, eth1.
In that case, pinning the configuration to the hardwareAddr will make Talos reliably configure the device even when the interface name changes.
4.3 - Virtual (shared) IP
One of the pain points when building a high-availability controlplane is giving clients a single IP or URL at which they can reach any of the controlplane nodes. The most common approaches - reverse proxy, load balancer, BGP, and DNS - all require external resources, and add complexity in setting up Kubernetes.
To simplify cluster creation, Talos Linux supports a “Virtual” IP (VIP) address to access the Kubernetes API server, providing high availability with no other resources required.
What happens is that the controlplane machines vie for control of the shared IP address using etcd elections. There can be only one owner of the IP address at any given time. If that owner disappears or becomes non-responsive, another owner will be chosen, and it will take up the IP address.
Requirements
The controlplane nodes must share a layer 2 network, and the virtual IP must be assigned from that shared network subnet.
In practical terms, this means that they are all connected via a switch, with no router in between them.
Note that the virtual IP election depends on etcd being up, as Talos uses etcd for elections and leadership (control) of the IP address.
The virtual IP is not restricted by ports - you can access any port that the control plane nodes are listening on, on that IP address. Thus it is possible to access the Talos API over the VIP, but it is not recommended, as you cannot access the VIP when etcd is down - and then you could not access the Talos API to recover etcd.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Choose your Shared IP
The Virtual IP should be a reserved, unused IP address in the same subnet as your controlplane nodes. It should not be assigned or assignable by your DHCP server.
For our example, we will assume that the controlplane nodes have the following IP addresses:
192.168.0.10
192.168.0.11
192.168.0.12
We then choose our shared IP to be:
192.168.0.15
Configure your Talos Machines
The shared IP setting is only valid for controlplane nodes.
For the example above, each of the controlplane nodes should have the following Machine Config snippet:
machine:
network:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: 192.168.0.15
Virtual IPs can also be configured on a VLAN interface.
machine:
network:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: 192.168.0.15
vlans:
- vlanId: 100
dhcp: true
vip:
ip: 192.168.1.15
For your own environment, the interface and the DHCP setting may differ, or you may use static addressing (cidr) instead of DHCP.
Caveats
Since VIP functionality relies on etcd for elections, the shared IP will not come alive until after you have bootstrapped Kubernetes.
This does mean that you cannot use the shared IP when issuing the talosctl bootstrap command (although, as noted above, it is not recommended to access the Talos API via the VIP).
Instead, the bootstrap command will need to target one of the controlplane nodes directly.
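Once Kubernetes has been bootstrapped and etcd is healthy, a simple way to confirm the VIP is live is to query the Kubernetes API server version endpoint on it (a sketch using the example VIP above; /version is typically readable anonymously with default RBAC):
curl -k https://192.168.0.15:6443/version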
4.4 - Wireguard Network
Configuring Wireguard Network
Quick Start
The quickest way to try out Wireguard is to use the talosctl cluster create command:
talosctl cluster create --wireguard-cidr 10.1.0.0/24
It will automatically generate a Wireguard network configuration for each node with the following network topology: all controlplane nodes act as Wireguard servers listening on port 51111, and all controlplanes and workers connect to all controlplanes.
It also sets PersistentKeepalive to 5 seconds to establish the controlplane-to-worker connections.
After the cluster is deployed, it should be possible to verify Wireguard network connectivity.
It is possible to deploy a container with hostNetwork enabled, then do kubectl exec -it <container> -- /bin/bash and either run:
ping 10.1.0.2
Or install the wireguard-tools package and run:
wg show
Wireguard show should output something like this:
interface: wg0
public key: OMhgEvNIaEN7zeCLijRh4c+0Hwh3erjknzdyvVlrkGM=
private key: (hidden)
listening port: 47946
peer: 1EsxUygZo8/URWs18tqB5FW2cLVlaTA+lUisKIf8nh4=
endpoint: 10.5.0.2:51111
allowed ips: 10.1.0.0/24
latest handshake: 1 minute, 55 seconds ago
transfer: 3.17 KiB received, 3.55 KiB sent
persistent keepalive: every 5 seconds
It is also possible to use generated configuration as a reference by pulling generated config files using:
talosctl read -n 10.5.0.2 /system/state/config.yaml > controlplane.yaml
talosctl read -n 10.5.0.3 /system/state/config.yaml > worker.yaml
Manual Configuration
All Wireguard configuration can be done by changing Talos machine config files. As an example we will use this official Wireguard quick start tutorial.
Key Generation
This part is exactly the same:
wg genkey | tee privatekey | wg pubkey > publickey
Setting up Device
Inline comments show the relation between the config and the wg quickstart tutorial commands:
...
network:
interfaces:
...
# ip link add dev wg0 type wireguard
- interface: wg0
mtu: 1500
# ip address add dev wg0 192.168.2.1/24
addresses:
- 192.168.2.1/24
# wg set wg0 listen-port 51820 private-key /path/to/private-key peer ABCDEF... allowed-ips 192.168.88.0/24 endpoint 209.202.254.14:8172
wireguard:
privateKey: <privatekey file contents>
listenPort: 51820
peers:
  - publicKey: ABCDEF...
    endpoint: 209.202.254.14:8172
    allowedIPs:
      - 192.168.88.0/24
...
When networkd gets this configuration, it will create the device, configure it, and bring it up (equivalent to ip link set up dev wg0).
All supported config parameters are described in the Machine Config Reference.
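After the configuration is applied, the resulting link can be inspected in the same way as other interfaces (a sketch reusing the get links command shown earlier):
talosctl -n <node IP> get links wg0 -o yaml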
5 - Discovery Service
Talos Linux includes node-discovery capabilities that depend on a discovery registry. This allows you to see the members of your cluster, and the associated IP addresses of the nodes.
talosctl get members
NODE NAMESPACE TYPE ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES
10.5.0.2 cluster Member talos-default-controlplane-1 1 talos-default-controlplane-1 controlplane Talos (v1.2.3) ["10.5.0.2"]
10.5.0.2 cluster Member talos-default-worker-1 1 talos-default-worker-1 worker Talos (v1.2.3) ["10.5.0.3"]
There are currently two supported discovery services: a Kubernetes registry (which stores data in the cluster’s etcd service) and an external registry service. Sidero Labs runs a public external registry service, which is enabled by default. The Kubernetes registry service is disabled by default. The advantage of the external registry service is that it is not dependent on etcd, and thus can inform you of cluster membership even when Kubernetes is down.
Video Walkthrough
To see a live demo of Cluster Discovery, see the video below:
Registries
Peers are aggregated from enabled registries.
By default, Talos will use the service registry, while the kubernetes registry is disabled.
To disable a registry, set disabled to true (this option is the same for all registries).
For example, to disable the service registry:
cluster:
discovery:
enabled: true
registries:
service:
disabled: true
Disabling all registries effectively disables member discovery altogether.
An enabled discovery service is required for KubeSpan to function correctly.
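Conversely, since the kubernetes registry is disabled by default, enabling it uses the same option; a sketch of the corresponding machine configuration:
cluster:
  discovery:
    enabled: true
    registries:
      kubernetes:
        disabled: false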
The kubernetes registry uses Kubernetes Node resource data and additional Talos annotations:
$ kubectl describe node <nodename>
Annotations: cluster.talos.dev/node-id: Utoh3O0ZneV0kT2IUBrh7TgdouRcUW2yzaaMl4VXnCd
networking.talos.dev/assigned-prefixes: 10.244.0.0/32,10.244.0.1/24
networking.talos.dev/self-ips: 172.20.0.2,fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94
...
The service registry by default uses a public external Discovery Service to exchange encrypted information about cluster members.
Discovery Service
Sidero Labs maintains a public discovery service at https://discovery.talos.dev/
whereby cluster members use a shared key that is globally unique to coordinate basic connection information (i.e. the set of possible “endpoints”, or IP:port pairs).
We call this data “affiliate data.”
Note: If KubeSpan is enabled, the data also includes the WireGuard public key.
Data sent to the discovery service is encrypted with AES-GCM encryption and endpoint data is separately encrypted with AES in ECB mode so that endpoints coming from different sources can be deduplicated server-side. Each node submits its own data, plus the endpoints it sees from other peers, to the discovery service. The discovery service aggregates the data, deduplicates the endpoints, and sends updates to each connected peer. Each peer receives information back from the discovery service, decrypts it and uses it to drive KubeSpan and cluster discovery.
Data is stored in memory only. The cluster ID is used as a key to select the affiliates (so that different clusters see different affiliates).
To summarize, the discovery service knows the client version, cluster ID, the number of affiliates, some encrypted data for each affiliate, and a list of encrypted endpoints. The discovery service doesn’t see actual node information – it only stores and updates encrypted blobs. Discovery data is encrypted/decrypted by the clients – the cluster members. The discovery service does not have the encryption key.
The discovery service may, with a commercial license, be operated by your organization and can be downloaded here. In order for nodes to communicate to the discovery service, they must be able to connect to it on TCP port 443.
Resource Definitions
Talos provides resources that can be used to introspect the discovery and KubeSpan features.
Discovery
Identities
The node’s unique identity (base62 encoded random 32 bytes) can be obtained with:
Note: Using base62 allows the ID to be URL encoded without having to use the ambiguous URL-encoding version of base64.
$ talosctl get identities -o yaml
...
spec:
nodeId: Utoh3O0ZneV0kT2IUBrh7TgdouRcUW2yzaaMl4VXnCd
The node identity is used as the unique Affiliate identifier.
The node identity resource is preserved in the STATE partition in the node-identity.yaml file.
The node identity is preserved across reboots and upgrades, but it is regenerated if the node is reset (wiped).
Affiliates
An affiliate is a proposed member: a node is an affiliate if it has the same cluster ID and secret.
$ talosctl get affiliates
ID VERSION HOSTNAME MACHINE TYPE ADDRESSES
2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF 2 talos-default-controlplane-2 controlplane ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
6EVq8RHIne03LeZiJ60WsJcoQOtttw1ejvTS6SOBzhUA 2 talos-default-worker-1 worker ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
NVtfu1bT1QjhNq5xJFUZl8f8I8LOCnnpGrZfPpdN9WlB 2 talos-default-worker-2 worker ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
Utoh3O0ZneV0kT2IUBrh7TgdouRcUW2yzaaMl4VXnCd 4 talos-default-controlplane-1 controlplane ["172.20.0.2","fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94"]
b3DebkPaCRLTLLWaeRF1ejGaR0lK3m79jRJcPn0mfA6C 2 talos-default-controlplane-3 controlplane ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
One of the Affiliates, with an ID matching the node identity, is populated from the node’s own data; the other Affiliates are pulled from the registries.
Enabled discovery registries run in parallel, and the discovered data is merged to build the list presented above.
Details about the data coming from each registry can be queried from the cluster-raw namespace:
$ talosctl get affiliates --namespace=cluster-raw
ID VERSION HOSTNAME MACHINE TYPE ADDRESSES
k8s/2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF 3 talos-default-controlplane-2 controlplane ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
k8s/6EVq8RHIne03LeZiJ60WsJcoQOtttw1ejvTS6SOBzhUA 2 talos-default-worker-1 worker ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
k8s/NVtfu1bT1QjhNq5xJFUZl8f8I8LOCnnpGrZfPpdN9WlB 2 talos-default-worker-2 worker ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
k8s/b3DebkPaCRLTLLWaeRF1ejGaR0lK3m79jRJcPn0mfA6C 3 talos-default-controlplane-3 controlplane ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
service/2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF 23 talos-default-controlplane-2 controlplane ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
service/6EVq8RHIne03LeZiJ60WsJcoQOtttw1ejvTS6SOBzhUA 26 talos-default-worker-1 worker ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
service/NVtfu1bT1QjhNq5xJFUZl8f8I8LOCnnpGrZfPpdN9WlB 20 talos-default-worker-2 worker ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
service/b3DebkPaCRLTLLWaeRF1ejGaR0lK3m79jRJcPn0mfA6C 14 talos-default-controlplane-3 controlplane ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
Each Affiliate ID is prefixed with k8s/ for data coming from the Kubernetes registry and with service/ for data coming from the discovery service.
Members
A member is an affiliate that has been approved to join the cluster. The members of the cluster can be obtained with:
$ talosctl get members
ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES
talos-default-controlplane-1 2 talos-default-controlplane-1 controlplane Talos (v1.4.8) ["172.20.0.2","fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94"]
talos-default-controlplane-2 1 talos-default-controlplane-2 controlplane Talos (v1.4.8) ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
talos-default-controlplane-3 1 talos-default-controlplane-3 controlplane Talos (v1.4.8) ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
talos-default-worker-1 1 talos-default-worker-1 worker Talos (v1.4.8) ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
talos-default-worker-2 1 talos-default-worker-2 worker Talos (v1.4.8) ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
6 - Interactive Dashboard
The interactive dashboard is enabled for all Talos platforms except for SBC images.
The dashboard can be disabled with the kernel parameter talos.dashboard.disabled=1.
The dashboard runs only on the physical video console (not the serial console) on the 2nd virtual TTY.
The first virtual TTY shows kernel logs, the same as in Talos versions before 1.4.0.
The virtual TTYs can be switched with the <Alt+F1> and <Alt+F2> keys.
Keys <F1> - <Fn> can be used to switch between different screens of the dashboard.
The dashboard uses either the UEFI framebuffer or the VGA/VESA framebuffer (for legacy BIOS boot).
For legacy BIOS boot, the screen resolution can be controlled with the vga= kernel parameter.
Summary Screen (F1)
Interactive Dashboard Summary Screen
The header shows brief information about the node:
- hostname
- Talos version
- uptime
- CPU and memory hardware information
- CPU and memory load, number of processes
Table view presents summary information about the machine:
- UUID (from SMBIOS data)
- Cluster name (when the machine config is available)
- Machine stage: Installing, Upgrading, Booting, Maintenance, Running, Rebooting, Shutting down, etc.
- Machine stage readiness: checks Talos service status, static pod status, etc. (for the Running stage)
- Machine type: controlplane/worker
- Number of members discovered in the cluster
- Kubernetes version
- Status of Kubernetes components: kubelet and Kubernetes controlplane components (only on controlplane machines)
- Network information: Hostname, Addresses, Gateway, Connectivity, DNS and NTP servers
Bottom part of the screen shows kernel logs, same as on the virtual TTY 1.
Monitor Screen (F2)
Interactive Dashboard Monitor Screen
Monitor screen provides live view of the machine resource usage: CPU, memory, disk, network and processes.
Network Config Screen (F3)
Note: the network config screen is only available for the metal platform.
Interactive Dashboard Network Config Screen
The network config screen provides editing capabilities for the metal platform network configuration.
The screen is split into three sections:
- the leftmost section provides a way to enter network configuration: hostname, DNS and NTP servers, configure the network interface either via DHCP or static IP address, etc.
- the middle section shows the current network configuration.
- the rightmost section shows the network configuration which will be applied after pressing “Save” button.
Once the platform network configuration is saved, it is immediately applied to the machine.
7 - Resetting a Machine
From time to time, it may be beneficial to reset a Talos machine to its “original” state. Bear in mind that this is a destructive action for the given machine. Doing this means removing the machine from Kubernetes and etcd (if applicable), and clearing any data on the machine that would normally persist across a reboot.
CLI
WARNING: Running talosctl reset on cloud VMs might result in the VM being unable to boot, as this wipes the entire disk. It might be more useful to just wipe the STATE and EPHEMERAL partitions on a cloud VM if not booting via iPXE:
talosctl reset --system-labels-to-wipe STATE --system-labels-to-wipe EPHEMERAL
The API command for doing this is talosctl reset.
There are a couple of flags as part of this command:
Flags:
--graceful if true, attempt to cordon/drain node and leave etcd (if applicable) (default true)
--reboot if true, reboot the node after resetting instead of shutting down
--system-labels-to-wipe strings   if set, just wipe selected system disk partitions by label but keep other partitions intact
The graceful flag is especially important when considering HA vs. non-HA Talos clusters.
If the machine is part of an HA cluster, a normal, graceful reset should work just fine right out of the box as long as the cluster is in a good state.
However, if this is a single-node cluster being used for testing purposes, a graceful reset is not an option, since etcd cannot be “left” if there is only a single member.
In this case, reset should be used with --graceful=false to skip performing checks that would normally block the reset.
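For example, resetting a single-node test cluster and rebooting it afterwards might look like this sketch, using the flags listed above:
talosctl -n <node IP> reset --graceful=false --reboot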
Kernel Parameter
Another way to reset a machine is to specify the talos.experimental.wipe=system kernel parameter.
If the machine got stuck in a boot loop and you have access to the console, you can use GRUB to specify this kernel argument.
The next time Talos boots, it will reset the system disk and reboot.
The next step can then be to install Talos again, either using PXE boot or by mounting an ISO.
8 - Upgrading Talos Linux
OS upgrades are effected by an API call, which can be sent via the talosctl CLI utility.
The upgrade API call passes a node the installer image to use to perform the upgrade. Each Talos version has a corresponding installer image, listed on the release page for the version, for example v1.4.8.
Upgrades use an A-B image scheme in order to facilitate rollbacks.
This scheme retains the previous Talos kernel and OS image following each upgrade.
If an upgrade fails to boot, Talos will roll back to the previous version.
Likewise, Talos may be manually rolled back via API (or talosctl rollback
), which will update the boot reference and reboot.
Unless explicitly told to preserve data, an upgrade will cause the node to wipe the EPHEMERAL partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
(This is the desired behavior except in specialised use cases such as single-node clusters.)
Note: An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default. Kubernetes upgrades should be managed separately, per upgrading Kubernetes.
Supported Upgrade Paths
Because Talos Linux is image based, an upgrade is almost the same as installing Talos, with the difference that the system has already been initialized with a configuration. The supported configuration may change between versions. The upgrade process should handle such changes transparently, but this migration is only tested between adjacent minor releases. Thus the recommended upgrade path is to always upgrade to the latest patch release of all intermediate minor releases.
For example, if upgrading from Talos 1.0 to Talos 1.2.4, the recommended upgrade path would be:
- upgrade from 1.0 to latest patch of 1.0 - to v1.0.6
- upgrade from v1.0.6 to latest patch of 1.1 - to v1.1.2
- upgrade from v1.1.2 to v1.2.4
Before Upgrade to v1.4.8
There are no specific actions to be taken before an upgrade.
Video Walkthrough
To see a live demo of an upgrade of Talos Linux, see the video below:
After Upgrade to v1.4.8
There are no specific actions to be taken after an upgrade.
talosctl upgrade
To upgrade a Talos node, specify the node’s IP address and the installer container image for the version of Talos to upgrade to.
For instance, if your Talos node has the IP address 10.20.30.40
and you want
to install the current version, you would enter a command such
as:
$ talosctl upgrade --nodes 10.20.30.40 \
--image ghcr.io/siderolabs/installer:v1.4.8
There is an option to this command: --preserve, which will explicitly tell Talos to keep ephemeral data intact.
In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
However, for a single-node control plane, make sure that --preserve=true is set.
Rarely, an upgrade command will fail due to a process holding a file open on disk.
In these cases, you can use the --stage flag.
This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
On the reboot, Talos sees that it needs to apply an upgrade, and will do so immediately.
Because this occurs in a just rebooted system, there will be no conflict with any files being held open.
After the upgrade is applied, the node will reboot again, in order to boot into the new version.
Note that because Talos Linux reboots via the kexec syscall, the extra reboot adds very little time.
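A staged upgrade is requested by adding the flag to the normal upgrade command, for example:
talosctl upgrade --nodes 10.20.30.40 --image ghcr.io/siderolabs/installer:v1.4.8 --stage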
Machine Configuration Changes
.machine.network.interfaces.bond now supports network device selectors for picking the devices to bond.
Upgrade Sequence
When a Talos node receives the upgrade command, it cordons itself in Kubernetes, to avoid receiving any new workload. It then starts to drain its existing workload.
NOTE: If any of your workloads are sensitive to being shut down ungracefully, be sure to use the lifecycle.preStop Pod spec.
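As a minimal sketch of such a hook (the container image and shutdown command below are placeholders, not part of Talos):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example.org/app:latest        # placeholder image
      lifecycle:
        preStop:
          exec:
            # placeholder command; run whatever gracefully stops your workload
            command: ["/bin/sh", "-c", "sleep 10"]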
Once all of the workload Pods are drained, Talos will start shutting down its internal processes.
If it is a control node, this will include etcd.
If preserve is not enabled, Talos will leave etcd membership.
(Talos ensures the etcd cluster is healthy, and will remain healthy after the node leaves the etcd cluster, before allowing a control plane node to be upgraded.)
Once all the processes are stopped and the services are shut down, the filesystems will be unmounted. This allows Talos to produce a very clean upgrade, as close as possible to a pristine system. We verify the disk and then perform the actual image upgrade. We set the bootloader to boot once with the new kernel and OS image, then we reboot.
After the node comes back up and Talos verifies itself, it will make the bootloader change permanent, rejoin the cluster, and finally uncordon itself to receive new workloads.
FAQs
Q. What happens if an upgrade fails?
A. Talos Linux attempts to safely handle upgrade failures.
The most common failure is an invalid installer image reference. In this case, Talos will fail to download the upgraded image and will abort the upgrade.
Sometimes, Talos is unable to successfully kill off all of the disk access points, in which case it cannot safely unmount all filesystems to effect the upgrade.
In this case, it will abort the upgrade and reboot.
(upgrade --stage can ensure that upgrades can occur even when the filesystems cannot be unmounted.)
It is possible (especially with test builds) that the upgraded Talos system will fail to start. In this case, the node will be rebooted, and the bootloader will automatically use the previous Talos kernel and image, thus effectively rolling back the upgrade.
Lastly, it is possible that Talos itself will upgrade successfully, start up, and rejoin the cluster but your workload will fail to run on it, for whatever reason.
This is when you would use the talosctl rollback command to revert to the previous Talos version.
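For example, to roll back the hypothetical node used above:
$ talosctl rollback --nodes 10.20.30.40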
Q. Can upgrades be scheduled?
A. Because the upgrade sequence is API-driven, you can easily tie it in to your own business logic to schedule and coordinate your upgrades.
Q. Can the upgrade process be observed?
A. Yes, using the talosctl dmesg -f command.
You can also use talosctl upgrade --wait, and optionally talosctl upgrade --wait --debug to also observe kernel logs.
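For example, to follow the upgrade of the hypothetical node used above until it completes:
$ talosctl upgrade --nodes 10.20.30.40 \
--image ghcr.io/siderolabs/installer:v1.4.8 --wait --debug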
Q. Are worker node upgrades handled differently from control plane node upgrades?
A. Short answer: no.
Long answer: both node types follow the same procedure.
From the user's standpoint, the processes are identical.
However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, via checking etcd quorum.
If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set preserve to true.
Q. Can I break my cluster by upgrading everything at once?
A. Possibly - it’s not recommended.
Nothing prevents the user from sending near-simultaneous upgrades to each node of the cluster, and while Talos Linux and Kubernetes can generally deal with this situation, other components of the cluster may not be able to recover from more than one node rebooting at a time (e.g., any software that maintains quorum or state across nodes, such as Rook/Ceph).