Introduction
1 - What is Talos?
Talos is a container-optimized Linux distribution: a reimagining of Linux for distributed systems such as Kubernetes. It is designed to be as minimal as possible while still remaining practical. As a result, Talos has a number of unique features:
- it is immutable
- it is atomic
- it is ephemeral
- it is minimal
- it is secure by default
- it is managed via a single declarative configuration file and gRPC API
Talos can be deployed on container, cloud, virtualized, and bare metal platforms.
Why Talos
In having less, Talos offers more. Security. Efficiency. Resiliency. Consistency.
All of these areas are improved simply by having less.
2 - Quickstart
Local Docker Cluster
The easiest way to try Talos is by using the CLI (talosctl) to create a cluster on a machine with docker installed.
Prerequisites
talosctl
Download talosctl:
curl -sL https://talos.dev/install | sh
kubectl
Download kubectl via one of the methods outlined in the Kubernetes documentation.
Create the Cluster
Now run the following:
talosctl cluster create
You can explore using Talos API commands:
talosctl dashboard --nodes 10.5.0.2
Verify that you can reach Kubernetes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-default-controlplane-1 Ready master 115s v1.26.3 10.5.0.2 <none> Talos (v1.3.7) <host kernel> containerd://1.5.5
talos-default-worker-1 Ready <none> 115s v1.26.3 10.5.0.3 <none> Talos (v1.3.7) <host kernel> containerd://1.5.5
Destroy the Cluster
When you are all done, remove the cluster:
talosctl cluster destroy
3 - Getting Started
This document will walk you through installing a full Talos cluster. If this is your first time using Talos Linux, we recommend starting with the Quickstart, which quickly creates a local virtual cluster on your workstation.
Regardless of where you run Talos, in general you need to:
- acquire the installation image
- decide on the endpoint for Kubernetes
- optionally create a load balancer
- configure Talos
- configure talosctl
- bootstrap Kubernetes
Prerequisites
talosctl
talosctl is a CLI tool which provides an easy interface to the Talos API.
Install talosctl before continuing:
curl -sL https://talos.dev/install | sh
Acquire the installation image
The most general way to install Talos is to use the ISO image (note there are easier methods for some platforms, such as pre-built AMIs for AWS - check the specific Installation Guides.)
The latest ISO image can be found on the GitHub Releases page:
- AMD64: https://github.com/siderolabs/talos/releases/download/v1.3.7/talos-amd64.iso
- ARM64: https://github.com/siderolabs/talos/releases/download/v1.3.7/talos-arm64.iso
When booted from the ISO, Talos will run in RAM, and will not install itself until it is provided a configuration. Thus, it is safe to boot the ISO onto any machine.
Alternative Booting
For network booting and self-built media, you can use the published kernel and initramfs images.
Note that to use alternate booting, there are a number of required kernel parameters. Please see the kernel docs for more information.
Decide the Kubernetes Endpoint
In order to configure Kubernetes, Talos needs to know what the endpoint (DNS name or IP address) of the Kubernetes API Server will be.
The endpoint should be the fully-qualified HTTP(S) URL for the Kubernetes API Server, which (by default) runs on port 6443 using HTTPS.
Thus, the format of the endpoint may be something like:
https://192.168.0.10:6443
https://kube.mycluster.mydomain.com:6443
https://[2001:db8:1234::80]:6443
In order to be highly available, the Kubernetes API Server endpoint should be configured so that it routes to all available control plane nodes. There are three common ways to do this:
Dedicated Load-balancer
If you are using a cloud provider or have your own load-balancer (such as HAProxy, nginx reverse proxy, or an F5 load-balancer), using a dedicated load balancer is a natural choice. Create an appropriate frontend matching the endpoint, and point the backends at the addresses of each of the Talos control plane nodes. (Note that given we have not yet created the control plane nodes, the IP addresses of the backends may not be known yet. We can bind the backends to the frontend at a later point.)
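As an illustration, a dedicated load balancer frontend/backend pair might look like the following HAProxy sketch (the names and addresses are assumptions; TCP mode passes TLS through to the API servers unchanged):

```
# Hypothetical HAProxy configuration for the Kubernetes API endpoint.
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend talos-controlplane

backend talos-controlplane
    mode tcp
    balance roundrobin
    # Point these at your control plane nodes once their addresses are known.
    server cp1 192.168.0.10:6443 check
    server cp2 192.168.0.11:6443 check
    server cp3 192.168.0.12:6443 check
```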
Layer 2 Shared IP
Talos has integrated support for serving Kubernetes from a shared/virtual IP address. This method relies on Layer 2 connectivity between control plane Talos nodes.
In this case, we choose an unused IP address on the same subnet as the Talos control plane nodes. For instance, if your control plane node IPs are:
- 192.168.0.10
- 192.168.0.11
- 192.168.0.12
you could choose the IP 192.168.0.15 as your shared IP address. (Make sure that 192.168.0.15 is not used by any other machine and that your DHCP server will not serve it to any other machine.)
Once chosen, form the full HTTPS URL from this IP:
https://192.168.0.15:6443
If you create a DNS record for this IP, note you will need to use the IP address itself, not the DNS name, to configure the shared IP (machine.network.interfaces[].vip.ip) in the Talos configuration.
For more information about using a shared IP, see the related guide.
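As a sketch, the shared IP from this example would appear in the machine configuration like so (the interface name eth0 is an assumption; use your node's actual interface):

```yaml
machine:
  network:
    interfaces:
      - interface: eth0 # assumption: substitute your actual interface name
        dhcp: true
        vip:
          ip: 192.168.0.15 # the shared IP chosen above (an IP, not a DNS name)
```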
DNS records
You can use DNS records to provide a measure of redundancy. In this case, you would add multiple A or AAAA records (one for each control plane node) to a DNS name.
For instance, you could add:
kube.cluster1.mydomain.com IN A 192.168.0.10
kube.cluster1.mydomain.com IN A 192.168.0.11
kube.cluster1.mydomain.com IN A 192.168.0.12
Then, your endpoint would be:
https://kube.cluster1.mydomain.com:6443
Decide how to access the Talos API
Many administrative tasks are performed by calling the Talos API on Talos Linux control plane nodes.
We recommend directly accessing the control plane nodes from the talosctl client, if possible (i.e. set your endpoints to the IP addresses of the control plane nodes).
This requires your control plane nodes to be reachable from the client IP.
If the control plane nodes are not directly reachable from the workstation where you run talosctl, then configure a load balancer to forward TCP port 50000 to the control plane nodes.
Do not use Talos Linux’s built-in VIP support for accessing the Talos API, as it will not function in the event of an etcd failure, and you will not be able to access the Talos API to fix things.
If you create a load balancer to forward the Talos API calls, make a note of the IP or hostname so that you can configure your talosctl tool’s endpoints below.
Configure Talos
When Talos boots without a configuration, such as when using the Talos ISO, it enters a limited maintenance mode and waits for a configuration to be provided.
In other installation methods, a configuration can be passed in on boot.
For example, Talos can be booted with the talos.config kernel command-line argument set to an HTTP(S) URL from which it should receive its configuration.
Where a PXE server is available, this is much more efficient than manually configuring each node.
If you do use this method, note that Talos requires a number of other kernel command-line parameters. See required kernel parameters.
If creating EC2 Kubernetes clusters, the configuration file can be passed in as --user-data to the aws ec2 run-instances command.
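For instance, a sketch of such an invocation (the AMI ID, instance type, and subnet are placeholders):

```shell
aws ec2 run-instances \
  --image-id <talos-ami-id> \
  --instance-type t3.small \
  --subnet-id <subnet-id> \
  --user-data file://controlplane.yaml
```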
In any case, we need to generate the configuration which is to be provided. We start by generating a secrets bundle, which should be saved in a secure location and can be used to generate machine or client configuration at any time:
talosctl gen secrets -o secrets.yaml
Now, we can generate the machine configuration for each node:
talosctl gen config --with-secrets secrets.yaml <cluster-name> <cluster-endpoint>
Here, cluster-name is an arbitrary name for the cluster, used in your local client configuration as a label. It should be unique in the configuration on your local workstation.
The cluster-endpoint is the Kubernetes endpoint you selected above. This is the Kubernetes API URL, and it should be a complete URL, with https:// and port. (The default port is 6443, but you may have configured your load balancer to forward a different port.)
For example:
$ talosctl gen config --with-secrets secrets.yaml my-cluster https://192.168.64.15:6443
generating PKI and tokens
created /Users/taloswork/controlplane.yaml
created /Users/taloswork/worker.yaml
created /Users/taloswork/talosconfig
When you run this command, a number of files are created in your current directory:
controlplane.yaml
worker.yaml
talosconfig
The .yaml files are Machine Configs. They provide Talos Linux servers their complete configuration, describing everything from what disk Talos should be installed on to network settings.
The controlplane.yaml file describes how Talos should form a Kubernetes cluster.
The talosconfig file (which is also YAML) is your local client configuration file.
Controlplane and Worker
The two types of Machine Configs correspond to the two roles of Talos nodes: control plane nodes (which run both the Talos and Kubernetes control planes) and worker nodes (which run the workloads).
The main difference between Controlplane Machine Config files and Worker Machine Config files is that the former contains information about how to form the Kubernetes cluster.
Modifying the Machine configs
The generated Machine Configs have defaults that work for many cases.
They use DHCP for interface configuration, and install to /dev/sda.
If the defaults work for your installation, you may use them as is.
Sometimes, you will need to modify the generated files so they work with your systems. A common example is needing to change the default installation disk. If you try to apply the machine config to a node and get an error like the one below, you need to specify a different installation disk:
$ talosctl apply-config --insecure -n 192.168.64.8 --file controlplane.yaml
error applying new configuration: rpc error: code = InvalidArgument desc = configuration validation failed: 1 error occurred:
* specified install disk does not exist: "/dev/sda"
You can verify which disks your nodes have by using the talosctl disks --insecure command. (Insecure mode is needed at this point because the PKI infrastructure has not yet been set up.)
For example:
$ talosctl -n 192.168.64.8 disks --insecure
DEV MODEL SERIAL TYPE UUID WWID MODALIAS NAME SIZE BUS_PATH
/dev/vda - - HDD - - virtio:d00000002v00001AF4 - 69 GB /pci0000:00/0000:00:06.0/virtio2/
In this case, you would modify controlplane.yaml and worker.yaml and edit the line:
install:
disk: /dev/sda # The disk used for installations.
to reflect vda instead of sda.
Customizing Machine Configuration
The generated machine configuration provides sane defaults for most cases, but machine configuration can be modified to fit specific needs.
Some machine configuration options are available as flags for the talosctl gen config command; for example, setting a specific Kubernetes version:
talosctl gen config --with-secrets secrets.yaml --kubernetes-version 1.25.4 my-cluster https://192.168.64.15:6443
Other modifications are done with machine configuration patches. Machine configuration patches can be applied with the talosctl gen config command:
talosctl gen config --with-secrets secrets.yaml --config-patch-control-plane @cni.patch my-cluster https://192.168.64.15:6443
Note: @cni.patch means that the patch is read from a file named cni.patch.
Machine Configs as Templates
Individual machines may need different settings: for instance, each may have a different static IP address.
When different files are needed for machines of the same type, there are two supported flows:
- Use the talosctl gen config command to generate a template, and then patch the template for each machine with talosctl machineconfig patch.
- Generate each machine configuration file separately with talosctl gen config while applying patches.
For example, given a machine configuration patch which sets the static machine hostname:
# worker1.patch
machine:
network:
hostname: worker1
Either of the following commands will generate a worker machine configuration file with the hostname set to worker1:
$ talosctl gen config --with-secrets secrets.yaml my-cluster https://192.168.64.15:6443
created /Users/taloswork/controlplane.yaml
created /Users/taloswork/worker.yaml
created /Users/taloswork/talosconfig
$ talosctl machineconfig patch worker.yaml --patch @worker1.patch --output worker1.yaml
talosctl gen config --with-secrets secrets.yaml --config-patch-worker @worker1.patch --output-types worker -o worker1.yaml my-cluster https://192.168.64.15:6443
Apply Configuration
To apply the Machine Configs, you need to know the machines’ IP addresses.
Talos will print out the IP addresses of the machines on the console during the boot process:
[4.605369] [talos] task loadConfig (1/1): this machine is reachable at:
[4.607358] [talos] task loadConfig (1/1): 192.168.0.2
[4.608766] [talos] task loadConfig (1/1): server certificate fingerprint:
[4.611106] [talos] task loadConfig (1/1): xA9a1t2dMxB0NJ0qH1pDzilWbA3+DK/DjVbFaJBYheE=
[4.613822] [talos] task loadConfig (1/1):
[4.614985] [talos] task loadConfig (1/1): upload configuration using talosctl:
[4.616978] [talos] task loadConfig (1/1): talosctl apply-config --insecure --nodes 192.168.0.2 --file <config.yaml>
[4.620168] [talos] task loadConfig (1/1): or apply configuration using talosctl interactive installer:
[4.623046] [talos] task loadConfig (1/1): talosctl apply-config --insecure --nodes 192.168.0.2 --mode=interactive
[4.626365] [talos] task loadConfig (1/1): optionally with node fingerprint check:
[4.628692] [talos] task loadConfig (1/1): talosctl apply-config --insecure --nodes 192.168.0.2 --cert-fingerprint 'xA9a1t2dMxB0NJ0qH1pDzilWbA3+DK/DjVbFaJBYheE=' --file <config.yaml>
If you do not have console access, the IP address may also be discoverable from your DHCP server.
Once you have the IP address, you can then apply the correct configuration.
talosctl apply-config --insecure \
--nodes 192.168.0.2 \
--file controlplane.yaml
The insecure flag is necessary because the PKI infrastructure has not yet been made available to the node. (Note: the connection will still be encrypted; it is just unauthenticated.) If you have console access, you can extract the server certificate fingerprint and use it for an additional layer of validation:
talosctl apply-config --insecure \
--nodes 192.168.0.2 \
--cert-fingerprint xA9a1t2dMxB0NJ0qH1pDzilWbA3+DK/DjVbFaJBYheE= \
--file controlplane.yaml
Using the fingerprint allows you to be sure you are sending the configuration to the correct machine, but it is completely optional. After the configuration is applied to a node, it will reboot. Repeat this process for each of the nodes in your cluster.
Understand talosctl, endpoints and nodes
It is important to understand the concepts of endpoints and nodes.
In short: endpoints are the machines that talosctl sends commands to, but nodes are the machines that the command operates on. The endpoint will forward the command to the nodes, if needed.
Endpoints
Endpoints are the IP addresses to which the talosctl client directly talks.
These should be the set of control plane nodes, either directly or through a load balancer.
Each endpoint will automatically proxy requests destined to another node in the cluster. This means that you only need access to the control plane nodes in order to access the rest of the network.
talosctl will automatically load balance requests and fail over between all of your endpoints.
You can pass --endpoints <IP Address1>,<IP Address2> as a comma-separated list of IP/DNS addresses to the current talosctl command.
You can also set the endpoints in your talosconfig by calling talosctl config endpoint <IP Address1> <IP Address2>. Note: these are space separated, not comma separated.
As an example, if the IP addresses of our control plane nodes are:
- 192.168.0.2
- 192.168.0.3
- 192.168.0.4
We would set those in the talosconfig with:
talosctl --talosconfig=./talosconfig \
config endpoint 192.168.0.2 192.168.0.3 192.168.0.4
Nodes
The node is the target you wish to perform the API call on.
When specifying nodes, their IPs and/or hostnames are as seen by the endpoint servers, not as from the client. This is because all connections are proxied through the endpoints.
You may provide -n or --nodes to any talosctl command to supply the node or (comma-separated) nodes on which you wish to perform the operation.
For example, to see the containers running on node 192.168.0.200:
talosctl -n 192.168.0.200 containers
To see the etcd logs on both nodes 192.168.0.10 and 192.168.0.11:
talosctl -n 192.168.0.10,192.168.0.11 logs etcd
It is possible to set a default set of nodes in the talosconfig file, but our recommendation is to explicitly pass in the node or nodes to be operated on with each talosctl command.
For a more in-depth discussion of Endpoints and Nodes, please see talosctl.
Default configuration file
You can specify which configuration file to use directly with the --talosconfig parameter:
talosctl --talosconfig=./talosconfig \
--nodes 192.168.0.2 version
However, talosctl comes with tooling to help you integrate and merge this configuration into the default talosctl configuration file. This is done with the merge option.
talosctl config merge ./talosconfig
This will merge your new talosconfig into the default configuration file ($XDG_CONFIG_HOME/talos/config.yaml), creating it if necessary.
Like Kubernetes, the talosconfig configuration file has multiple “contexts” which correspond to multiple clusters. The <cluster-name> you chose above will be used as the context name.
Kubernetes Bootstrap
Bootstrapping your Kubernetes cluster with Talos is as simple as:
talosctl bootstrap --nodes 192.168.0.2
The bootstrap operation should only be called ONCE and only on a SINGLE control plane node!
The IP can be any of your control plane nodes (or the load balancer, if used for the Talos API endpoint).
At this point, Talos will form an etcd cluster, generate all of the core Kubernetes assets, and start the Kubernetes control plane components.
After a few moments, you will be able to download your Kubernetes client configuration and get started:
talosctl kubeconfig
Running this command will add (merge) your new cluster into your local Kubernetes configuration.
If you would prefer the configuration to not be merged into your default Kubernetes configuration file, pass in a filename:
talosctl kubeconfig alternative-kubeconfig
You should now be able to connect to Kubernetes and see your nodes:
kubectl get nodes
And use talosctl to explore your cluster:
talosctl -n <NODEIP> dashboard
For a list of all the commands and operations that talosctl provides, see the CLI reference.
4 - System Requirements
Minimum Requirements
Role | Memory | Cores | System Disk |
---|---|---|---|
Control Plane | 2 GiB | 2 | 10 GiB |
Worker | 1 GiB | 1 | 10 GiB |
Recommended
Role | Memory | Cores | System Disk |
---|---|---|---|
Control Plane | 4 GiB | 4 | 100 GiB |
Worker | 2 GiB | 2 | 100 GiB |
These requirements are similar to that of Kubernetes.
Storage
Talos Linux itself requires less than 100 MB of disk space, but the EPHEMERAL partition is used to store pulled images, container work directories, and so on. Thus a minimum of 10 GiB of disk space is required; 100 GiB is desired. Note, however, that because Talos Linux assumes complete control of the disk it is installed on (so that it can control the partition table for image-based upgrades), you cannot partition the rest of the disk for use by workloads.
Thus it is recommended to install Talos Linux on a small, dedicated disk: using a terabyte-sized SSD for the Talos install disk would be wasteful. Sidero Labs recommends having separate disks (apart from the Talos install disk) to be used for storage.
5 - What's New in Talos 1.3
See also upgrade notes for important changes.
Component Updates
- Kubernetes: v1.26.0
- Flannel: v0.20.2
- CoreDNS: v1.10.0
- etcd: v3.5.6
- Linux: 5.15.82
- containerd: v1.6.12
Talos is built with Go 1.19.4.
Kubernetes
kube-apiserver
Custom Audit Policy
Talos now supports setting a custom audit policy for kube-apiserver in the machine configuration.
cluster:
apiServer:
auditPolicy:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
etcd
Secrets Encryption with secretbox Algorithm
By default, new clusters will use secretbox for etcd secrets encryption instead of AES-CBC. If both are configured, then secretbox will take precedence for new writes. Old clusters may keep using AES-CBC.
To enable secretbox, you need to add an encryption secret at cluster.secretboxEncryptionSecret after an upgrade to Talos 1.3. You should keep aescbcEncryptionSecret, however: even if secretbox is enabled, older data will still be encrypted with AES-CBC.
How to generate the secret for secretbox:
dd if=/dev/random of=/dev/stdout bs=32 count=1 | base64
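The generated value is then placed in the cluster configuration; a sketch (the secrets shown are placeholders):

```yaml
cluster:
  secretboxEncryptionSecret: <output-of-the-command-above>
  # Keep the existing AES-CBC secret so older data remains readable:
  aescbcEncryptionSecret: <existing-aescbc-secret>
```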
Node Labels
Talos now supports specifying node labels in the machine configuration:
machine:
nodeLabels:
rack: rack1a
zone: us-east-1a
Changes to the node labels will be applied immediately without restarting kubelet.
Talos keeps track of the owned node labels in the talos.dev/owned-labels annotation.
Static Pod Manifests
Talos by default (for new clusters) doesn’t configure kubelet to watch the /etc/kubernetes/manifests directory for static pod manifests.
Talos-managed static pods are served via a local HTTP server, which prevents potential security vulnerabilities related to malicious static pod manifests being placed in the aforementioned directory.
Static pods should always be configured in machine.pods instead of using machine.files to put files in the /etc/kubernetes/manifests directory.
To re-enable support for /etc/kubernetes/manifests, you may set machine.kubelet.disableManifestsDirectory.
Example:
machine:
kubelet:
disableManifestsDirectory: no
etcd
etcd Consistency Check
Talos enables the --experimental-compact-hash-check-enabled option by default to improve etcd store consistency guarantees.
This option is only available with etcd >= v3.5.5, so Talos doesn’t support versions of etcd older than v3.5.5 (Talos 1.3.0 defaults to etcd v3.5.6).
etcd Member ID
Talos now internally handles etcd member removal by member ID instead of member name (hostname). This resolves the case when the member name is not accurate or empty (e.g. when etcd hasn’t fully joined yet).
The command talosctl etcd remove-member now accepts member IDs instead of member names.
A new resource can be used to get the member ID of the Talos node:
$ talosctl get etcdmember
NODE NAMESPACE TYPE ID VERSION MEMBER ID
10.150.0.4 etcd EtcdMember local 1 143fab7c7ccd2577
CRI (containerd)
CRI Configuration Overrides
Talos no longer supports CRI config overrides placed in the /var/cri/conf.d directory.
The new way to add configuration overrides correctly handles merging of containerd/CRI plugin configuration.
Registry Mirrors
Talos had an inconsistency in the way registry mirror endpoints were handled when compared with the containerd implementation:
machine:
registries:
mirrors:
docker.io:
endpoints:
- "https://mirror-registry/v2/mirror.docker.io"
Talos would use the endpoint https://mirror-registry/v2/mirror.docker.io, while containerd would use https://mirror-registry/v2/mirror.docker.io/v2.
This inconsistency is now fixed, and Talos uses the same endpoint as containerd.
A new overridePath configuration option is introduced to skip appending /v2 on both the Talos and containerd side:
machine:
registries:
mirrors:
docker.io:
endpoints:
- "https://mirror-registry/v2/mirror.docker.io"
overridePath: true
registry.k8s.io
Talos now uses registry.k8s.io instead of k8s.gcr.io for Kubernetes container images. See the Kubernetes documentation for additional details.
If you are using registry mirrors, or in air-gapped installations, you may need to update your configuration.
Linux
cgroups v1
Talos always defaults to using cgroups v2 when Talos doesn’t run in a container (when running in a container, Talos follows the host cgroups mode).
Talos can now be forced to use cgroups v1 by setting the boot kernel argument talos.unified_cgroup_hierarchy=0:
machine:
install:
extraKernelArgs:
- "talos.unified_cgroup_hierarchy=0"
The current cgroups mode can be checked with talosctl ls /sys/fs/cgroup.
cgroups v1:
blkio
cpu
cpuacct
cpuset
devices
freezer
hugetlb
memory
net_cls
net_prio
perf_event
pids
cgroups v2:
cgroup.controllers
cgroup.max.depth
cgroup.max.descendants
cgroup.procs
cgroup.stat
cgroup.subtree_control
cgroup.threads
cpu.stat
cpuset.cpus.effective
cpuset.mems.effective
init
io.stat
kubepods
memory.numa_stat
memory.stat
podruntime
system
Note: cgroups v1 is deprecated and should be used only for compatibility with workloads which don’t support cgroups v2 yet.
Kernel Command Line ip= Argument
Talos now supports referencing an interface by its MAC address using enx<MAC> notation in the ip= argument:
ip=172.20.0.2::172.20.0.1:255.255.255.0::enx7085c2dfbc59
Talos correctly handles multiple ip= arguments, and also enables forcing DHCP on a specific interface:
vlan=eth0.137:eth0 ip=eth0.137:dhcp
Kernel Module Parameters
Talos now supports setting kernel module parameters.
Example:
machine:
kernel:
modules:
- name: "br_netfilter"
parameters:
- nf_conntrack_max=131072
BTF Support
Talos Linux kernel now ships with BTF (BPF Type Format) support enabled:
$ talosctl -n 10.150.0.4 ls -l /sys/kernel/btf
NODE MODE UID GID SIZE(B) LASTMOD NAME
10.150.0.4 drwxr-xr-x 0 0 0 Dec 13 16:51:19 .
10.150.0.4 -r--r--r-- 0 0 11578002 Dec 13 16:51:19 vmlinux
This can be used to compile BPF programs against the kernel without kernel sources, or to load relocatable BPF programs.
Platform Support
Exoscale Platform
Talos adds support for a new platform: Exoscale.
Exoscale provides a firewall, a TCP load balancer, and autoscale groups. It works well with the CCM and the Kubernetes node autoscaler.
Nano Pi R4S
Talos now supports the Nano Pi R4S SBC.
Raspberry Generic Images
The Raspberry Pi 4 specific image has been deprecated and will be removed in the v1.4 release of Talos. Talos now ships a generic Raspberry Pi image that should support more Raspberry Pi variants. Refer to the docs to find which ones are supported.
PlatformMetadata Resource
Talos now publishes information about the platform it is running on in the PlatformMetadata resource:
# talosctl get platformmetadata -o yaml
spec:
platform: equinixMetal
hostname: ci-blue-worker-amd64-0
region: dc
zone: dc13
instanceType: c3.medium.x86
instanceId: efc0f667-XXX-XXX-XXXX-XXXXXXX
providerId: equinixmetal://efc0f667-XXX-XXX-XXXX-XXXXXXX
Networking
KubeSpan
The KubeSpan MTU link size is now configurable via the network.kubespan.mtu setting in the machine configuration.
The default KubeSpan MTU assumes that the underlying network MTU is 1500 bytes; if the underlying network MTU is different, the KubeSpan MTU should be adjusted accordingly.
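For example, if the underlying network MTU were 1400 bytes, the machine configuration might be adjusted as follows (the value 1320 is an assumption based on subtracting typical WireGuard overhead; verify for your network):

```yaml
machine:
  network:
    kubespan:
      enabled: true
      mtu: 1320 # assumption: underlay MTU (1400) minus WireGuard overhead
```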
KubeSpan automatically publishes the machine’s external (public) IP as a machine endpoint (as discovered by connecting to the discovery service); this allows establishing a connection to a machine behind NAT if the KubeSpan port 51820 is forwarded to the machine.
KubeSpan by default publishes all machine addresses as WireGuard endpoints and finds the set of endpoints that are reachable for each pair of machines. The set of endpoints can be manually filtered via the machine.network.kubespan.filters.endpoints setting in the machine configuration.
Route MTU
Talos now supports setting MTU for a specific route.
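A sketch of what a per-route MTU might look like in the machine configuration (the interface, addresses, and MTU value are assumptions):

```yaml
machine:
  network:
    interfaces:
      - interface: eth0 # assumption
        routes:
          - network: 10.0.0.0/8   # example destination
            gateway: 192.168.0.1  # example gateway
            mtu: 1450             # MTU applied to this specific route
```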
talosctl
Action Tracking
Action tracking for the commands talosctl reboot, talosctl shutdown, talosctl reset, and talosctl upgrade is now enabled by default.
The previous behavior can be restored by setting the --wait=false flag.
talosctl machineconfig patch
A new subcommand, machineconfig patch, is added to talosctl to allow patching of machine configuration.
It accepts a machine config file and a list of patches as input, and outputs the patched machine configuration.
Patches can be sourced from the command line or from a file. Output can be written to a file or to stdout.
Example:
talosctl machineconfig patch controlplane.yaml --patch '[{"op":"replace","path":"/cluster/clusterName","value":"patch1"}]' --patch @/path/to/patch2.json
Additionally, the talosctl machineconfig gen subcommand is introduced as an alias to talosctl gen config.
talosctl gen config
The command talosctl gen config now supports generating a single type of output (e.g. control plane machine configuration) by specifying the --output-types flag, which is useful with a pre-generated secrets bundle, e.g.:
$ talosctl gen secrets # this outputs secrets bundle to secrets.yaml
$ talosctl gen config mycluster https://mycluster:6443 --with-secrets secrets.yaml --output-types controlplane -o -
version: v1alpha1 # Indicates the schema used to decode the contents.
debug: false # Enable verbose logging to the console.
persist: true # Indicates whether to pull the machine config upon every boot.
# Provides machine specific configuration options.
machine:
...
talosctl get -o jsonpath
The command talosctl get now supports the jsonpath output format:
$ talosctl -n 10.68.182.3 get address -o jsonpath='{.spec.address}'
10.68.182.3/31
127.0.0.1/8
::1/128
192.168.11.128/32
Developer Experience
New Go Module Path
Talos now uses github.com/siderolabs/talos and github.com/siderolabs/talos/pkg/machinery as its Go module paths.
6 - Support Matrix
Talos Version | 1.3 | 1.2 |
---|---|---|
Release Date | 2022-12-01 | 2022-09-01 (1.2.0) |
End of Community Support | 1.4.0 release (2023-03-15, TBD) | 1.3.0 release (2022-12-15) |
Enterprise Support | offered by Sidero Labs Inc. | offered by Sidero Labs Inc. |
Kubernetes | 1.26, 1.25, 1.24 | 1.25, 1.24, 1.23 |
Architecture | amd64, arm64 | amd64, arm64 |
Platforms | ||
- cloud | AWS, GCP, Azure, Digital Ocean, Exoscale, Hetzner, OpenStack, Oracle Cloud, Scaleway, Vultr, Upcloud | AWS, GCP, Azure, Digital Ocean, Hetzner, OpenStack, Oracle Cloud, Scaleway, Vultr, Upcloud |
- bare metal | x86: BIOS, UEFI; arm64: UEFI; boot: ISO, PXE, disk image | x86: BIOS, UEFI; arm64: UEFI; boot: ISO, PXE, disk image |
- virtualized | VMware, Hyper-V, KVM, Proxmox, Xen | VMware, Hyper-V, KVM, Proxmox, Xen |
- SBCs | Banana Pi M64, Jetson Nano, Libre Computer Board ALL-H3-CC, Nano Pi R4S, Pine64, Pine64 Rock64, Radxa ROCK Pi 4c, Raspberry Pi 4B, Raspberry Pi Compute Module 4 | Banana Pi M64, Jetson Nano, Libre Computer Board ALL-H3-CC, Pine64, Pine64 Rock64, Radxa ROCK Pi 4c, Raspberry Pi 4B, Raspberry Pi Compute Module 4 |
- local | Docker, QEMU | Docker, QEMU |
Cluster API | ||
CAPI Bootstrap Provider Talos | >= 0.5.6 | >= 0.5.5 |
CAPI Control Plane Provider Talos | >= 0.4.10 | >= 0.4.9 |
Sidero | >= 0.5.7 | >= 0.5.5 |
Platform Tiers
- Tier 1: Automated tests, high-priority fixes.
- Tier 2: Tested from time to time, medium-priority bugfixes.
- Tier 3: Not tested by core Talos team, community tested.
Tier 1
- Metal
- AWS
- GCP
Tier 2
- Azure
- Digital Ocean
- OpenStack
- VMware
Tier 3
- Exoscale
- Hetzner
- nocloud
- Oracle Cloud
- Scaleway
- Vultr
- Upcloud