Installation
- 1: Bare Metal Platforms
- 1.1: Equinix Metal
- 1.2: ISO
- 1.3: Matchbox
- 1.4: Network Configuration
- 1.5: PXE
- 1.6: SecureBoot
- 2: Virtualized Platforms
- 2.1: Hyper-V
- 2.2: KVM
- 2.3: OpenNebula
- 2.4: Proxmox
- 2.5: Vagrant & Libvirt
- 2.6: VMware
- 2.7: Xen
- 3: Cloud Platforms
- 3.1: Akamai
- 3.2: AWS
- 3.3: Azure
- 3.4: CloudStack
- 3.5: DigitalOcean
- 3.6: Exoscale
- 3.7: GCP
- 3.8: Hetzner
- 3.9: Kubernetes
- 3.10: Nocloud
- 3.11: OpenStack
- 3.12: Oracle
- 3.13: Scaleway
- 3.14: UpCloud
- 3.15: Vultr
- 4: Local Platforms
- 4.1: Docker
- 4.2: QEMU
- 4.3: VirtualBox
- 5: Single Board Computers
- 5.1: Banana Pi M64
- 5.2: Friendlyelec Nano PI R4S
- 5.3: Jetson Nano
- 5.4: Libre Computer Board ALL-H3-CC
- 5.5: Orange Pi R1 Plus LTS
- 5.6: Pine64
- 5.7: Pine64 Rock64
- 5.8: Radxa ROCK 4C Plus
- 5.9: Radxa ROCK PI 4
- 5.10: Radxa ROCK PI 4C
- 5.11: Raspberry Pi Series
- 5.12: Turing RK1
- 6: Boot Assets
- 7: Omni SaaS
- 8: talosctl
1 - Bare Metal Platforms
1.1 - Equinix Metal
You can create a Talos Linux cluster on Equinix Metal in a variety of ways, such as through the EM web UI, or the metal command line tool.
Regardless of the method, the process is:
- Create a DNS entry for your Kubernetes endpoint.
- Generate the configurations using talosctl.
- Provision your machines on Equinix Metal.
- Push the configurations to your servers (if not done as part of the machine provisioning).
- Configure your Kubernetes endpoint to point to the newly created control plane nodes.
- Bootstrap the cluster.
Define the Kubernetes Endpoint
There are a variety of ways to create an HA endpoint for the Kubernetes cluster. Some of the ways are:
- DNS
- Load Balancer
- BGP
Whatever way is chosen, it should result in an IP address/DNS name that routes traffic to all the control plane nodes. We do not know the control plane node IP addresses at this stage, but we should define the endpoint DNS entry so that we can use it in creating the cluster configuration. After the nodes are provisioned, we can use their addresses to create the endpoint A records, or bind them to the load balancer, etc.
Create the Machine Configuration Files
Generating Configurations
Using the DNS name of the load balancer defined above, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-em-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
The port used above should be 6443, unless your load balancer maps a different port to port 6443 on the control plane nodes.
Validate the Configuration Files
talosctl validate --config controlplane.yaml --mode metal
talosctl validate --config worker.yaml --mode metal
Note: Validation of the install disk could potentially fail as validation is performed on your local machine and the specified disk may not exist.
Passing in the configuration as User Data
You can use the metadata service provided by Equinix Metal to pass in the machine configuration. It is required to add a shebang to the top of the configuration file.
The convention we use is #!talos.
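For example, with GNU sed the shebang can be prepended in place (a minimal sketch; adjust the file name to whichever configuration you are uploading):
# prepend the #!talos shebang as the first line of the machine configuration
sed -i '1i #!talos' controlplane.yaml
# verify the first line
head -n 1 controlplane.yaml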
Provision the machines in Equinix Metal
Talos Linux can be PXE-booted on Equinix Metal using Image Factory, using the equinixMetal platform, e.g.:
https://pxe.factory.talos.dev/pxe/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/equinixMetal-amd64
(this URL references the default schematic and amd64 architecture).
Follow the Image Factory guide to create a custom schematic, e.g. with CPU microcode updates. The PXE boot URL can be used as the iPXE script URL.
Using the Equinix Metal UI
Simply select the location and type of machines in the Equinix Metal web interface.
Select 'Custom iPXE' as the Operating System and enter the Image Factory PXE URL as the iPXE script URL, then select the number of servers to create, and name them (in lowercase only).
Under optional settings, you can paste in the contents of controlplane.yaml that was generated above (ensuring you add a first line of #!talos).
You can repeat this process to create machines of different types for control plane and worker nodes (although you would pass in worker.yaml for the worker nodes, as user data).
If you did not pass in the machine configuration as User Data, you need to provide it to each machine, with the following command:
talosctl apply-config --insecure --nodes <Node IP> --file ./controlplane.yaml
Creating a Cluster via the Equinix Metal CLI
This guide assumes the user has a working API token and the Equinix Metal CLI installed.
Note: Ensure you have prepended #!talos to the controlplane.yaml file.
metal device create \
--project-id $PROJECT_ID \
--metro $METRO \
--operating-system "custom_ipxe" \
--ipxe-script-url "https://pxe.factory.talos.dev/pxe/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/equinixMetal-amd64" \
--plan $PLAN \
--hostname $HOSTNAME \
--userdata-file controlplane.yaml
e.g. metal device create -p <projectID> -f da11 -O custom_ipxe -P c3.small.x86 -H steve.test.11 --userdata-file ./controlplane.yaml --ipxe-script-url "https://pxe.factory.talos.dev/pxe/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/equinixMetal-amd64"
Repeat this to create each control plane node desired: there should usually be 3 for an HA cluster.
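For example, a small shell loop around the same command can create all three control plane nodes at once (a sketch; it assumes the PROJECT_ID, METRO and PLAN variables from the example above and hostnames of your choosing):
for i in 1 2 3; do
  # create one control plane machine per iteration, all booting the same Image Factory PXE URL
  metal device create \
    --project-id $PROJECT_ID \
    --metro $METRO \
    --operating-system "custom_ipxe" \
    --ipxe-script-url "https://pxe.factory.talos.dev/pxe/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/equinixMetal-amd64" \
    --plan $PLAN \
    --hostname talos-cp-0$i \
    --userdata-file controlplane.yaml
done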
Update the Kubernetes endpoint
Now that our control plane nodes have been created and we know their IP addresses, we can associate them with the Kubernetes endpoint.
Configure your load balancer to route traffic to these nodes, or add A records to your DNS entry for the endpoint, for each control plane node.
e.g.
host endpoint.mydomain.com
endpoint.mydomain.com has address 145.40.90.201
endpoint.mydomain.com has address 147.75.109.71
endpoint.mydomain.com has address 145.40.90.177
Bootstrap Etcd
Set the endpoints and nodes for talosctl:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
This only needs to be issued to one control plane node.
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
1.2 - ISO
Talos can be installed on a bare-metal machine using an ISO image.
ISO images for amd64 and arm64 architectures are available on the Talos releases page.
Talos doesn’t install itself to disk when booted from an ISO until the machine configuration is applied.
Please follow the getting started guide for the generic steps on how to install Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from a Talos ISO. The boot order should prefer disk over ISO, or the ISO should be removed after the installation to make Talos boot from disk.
See kernel parameters reference for the list of kernel parameters supported by Talos.
There are two flavors of ISO images available:
- metal-<arch>.iso supports booting on BIOS and UEFI systems (for x86, UEFI only for arm64)
- metal-<arch>-secureboot.iso supports booting only on UEFI systems in SecureBoot mode (via Image Factory)
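For example, the amd64 ISO for the default schematic can be fetched with curl via Image Factory (a sketch using the default-schematic URL that appears elsewhere in these guides; pick the architecture and version you need):
curl -LO https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/metal-amd64.iso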
1.3 - Matchbox
Creating a Cluster
In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing load balancer, matchbox deployment, and some familiarity with iPXE.
We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch with RFC6902 jsonpatch which will be applied during the config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config worker.yaml --mode metal
worker.yaml is valid for metal mode
Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
A special kernel parameter (talos.config) must be used to inform Talos about where it should retrieve its configuration file.
To keep things simple we will place controlplane.yaml and worker.yaml into Matchbox's assets directory.
This directory is automatically served by Matchbox.
Create the Matchbox Configuration Files
The profiles we will create will reference vmlinuz and initramfs.xz.
Download these files from the release of your choice, and place them in /var/lib/matchbox/assets.
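For example, a minimal sketch of fetching the boot assets and placing them, together with the machine configs, into the Matchbox assets directory (the release asset names and URLs are assumptions; adjust them to the release you chose):
cd /var/lib/matchbox/assets
# kernel and initramfs, renamed to match the profile boot paths below
curl -L -o vmlinuz https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/vmlinuz-amd64
curl -L -o initramfs.xz https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/initramfs-amd64.xz
# machine configuration files served over HTTP by Matchbox
cp /path/to/controlplane.yaml /path/to/worker.yaml /var/lib/matchbox/assets/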
Profiles
Control Plane Nodes
{
"id": "control-plane",
"name": "control-plane",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/controlplane.yaml"
]
}
}
Note: Be sure to change http://matchbox.talos.dev to the endpoint of your matchbox server.
Worker Nodes
{
"id": "default",
"name": "default",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/worker.yaml"
]
}
}
Groups
Now, create the following groups, and ensure that the selectors are accurate for your specific setup.
{
"id": "control-plane-1",
"name": "control-plane-1",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-2",
"name": "control-plane-2",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-3",
"name": "control-plane-3",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "default",
"name": "default",
"profile": "default"
}
Boot the Machines
Now that we have our configuration files in place, boot all the machines. Talos will come up on each machine, grab its configuration file, and bootstrap itself.
Bootstrap Etcd
Set the endpoints and nodes:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
1.4 - Network Configuration
By default, Talos will run a DHCP client on all interfaces which have a link, and that might be enough for most cases. If advanced network configuration is required, it can be done via the machine configuration file.
But sometimes it is required to apply network configuration even before the machine configuration can be fetched from the network.
Kernel Command Line
Talos supports some kernel command line parameters to configure network before the machine configuration is fetched.
Note: Kernel command line parameters are not persisted after Talos installation, so proper network configuration should be done via the machine configuration.
Address, default gateway and DNS servers can be configured via the ip= kernel command line parameter:
ip=172.20.0.2::172.20.0.1:255.255.255.0::eth0.100:::::
Bonding can be configured via the bond= kernel command line parameter:
bond=bond0:eth0,eth1:balance-rr
VLANs can be configured via the vlan= kernel command line parameter:
vlan=eth0.100:eth0
See kernel parameters reference for more details.
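Since kernel command line settings are not persisted, the equivalent static configuration should also be put into the machine configuration. A minimal sketch of a config patch mirroring the ip= example above (the interface name, addresses and nameserver are illustrative assumptions):
cat > network.patch.yaml <<'EOF'
machine:
  network:
    interfaces:
      - interface: eth0
        addresses:
          - 172.20.0.2/24
        routes:
          - network: 0.0.0.0/0
            gateway: 172.20.0.1
    nameservers:
      - 1.1.1.1
EOF
# apply the patch while generating the machine configuration
talosctl gen config my-cluster https://<endpoint>:6443 --config-patch @network.patch.yaml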
Platform Network Configuration
Some platforms (e.g. AWS, Google Cloud, etc.) have their own network configuration mechanisms, which can be used to perform the initial network configuration.
There is no such mechanism for bare-metal platforms, so Talos provides a way to use the platform network config on the metal platform to submit the initial network configuration.
The platform network configuration is a YAML document which contains resource specifications for various network resources.
For the metal platform, the interactive dashboard can be used to edit the platform network configuration; the configuration can also be created manually.
The current value of the platform network configuration can be retrieved using the MetaKeys resource (key 0x0a):
talosctl get meta 0x0a
The platform network configuration can be updated using the talosctl meta command for the running node:
talosctl meta write 0x0a '{"externalIPs": ["1.2.3.4"]}'
talosctl meta delete 0x0a
The initial platform network configuration for the metal platform can also be included in the generated Talos image:
docker run --rm -i ghcr.io/siderolabs/imager:v1.10.0-alpha.0 iso --arch amd64 --tar-to-stdout --meta 0x0a='{...}' | tar xz
docker run --rm -i --privileged ghcr.io/siderolabs/imager:v1.10.0-alpha.0 image --platform metal --arch amd64 --tar-to-stdout --meta 0x0a='{...}' | tar xz
The platform network configuration gets merged with other sources of network configuration; the details can be found in the network resources guide.
nocloud Network Configuration
Some bare-metal providers provide a way to configure the network via the nocloud data source.
Talos Linux can automatically pick up this configuration when the nocloud image is used.
1.5 - PXE
Talos can be installed on bare-metal using a PXE service. There are more detailed guides for PXE booting using Matchbox.
This guide describes generic steps for PXE booting Talos on bare-metal.
First, download the vmlinuz and initramfs assets from the Talos releases page.
Set up the machines to PXE boot from the network (usually by setting the boot order in the BIOS).
There might be options specific to the hardware being used, booting in BIOS or UEFI mode, using iPXE, etc.
Talos requires the following kernel parameters to be set on the initial boot:
talos.platform=metal
slab_nomerge
pti=on
When booted from the network without machine configuration, Talos will start in maintenance mode.
Please follow the getting started guide for the generic steps on how to install Talos.
See kernel parameters reference for the list of kernel parameters supported by Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from network. The boot order should prefer disk over network.
Talos can automatically fetch the machine configuration from the network on the initial boot using the talos.config kernel parameter.
A metadata service (HTTP service) can be implemented to deliver customized configuration to each node, for example by using the MAC address of the node:
talos.config=https://metadata.service/talos/config?mac=${mac}
Note: The talos.config kernel parameter supports other substitution variables; see kernel parameters reference for the full list.
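Putting the pieces together, a complete set of kernel arguments for a metal PXE boot might look like the following (a sketch based on the Matchbox profile shown earlier; the config URL is a placeholder for your own metadata service):
initrd=initramfs.xz init_on_alloc=1 slab_nomerge pti=on console=tty0 printk.devkmsg=on talos.platform=metal talos.config=https://metadata.service/talos/config?mac=${mac}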
PXE booting can also be performed via Image Factory.
1.6 - SecureBoot
Talos now supports booting on UEFI systems in SecureBoot mode. When combined with TPM-based disk encryption, this provides a Trusted Boot experience.
Note: SecureBoot is not supported on x86 platforms in BIOS mode.
The implementation uses systemd-boot as the boot menu, while the Talos kernel, initramfs and cmdline arguments are combined into the Unified Kernel Image (UKI) format.
The UEFI firmware loads the systemd-boot bootloader, which then loads the UKI image.
Both the systemd-boot bootloader and the Talos UKI image are signed with a key which is enrolled into the UEFI firmware.
As Talos Linux is fully contained in the UKI image, the full operating system is verified and booted by the UEFI firmware.
Note: There is no support at the moment to upgrade non-UKI (GRUB-based) Talos installation to use UKI/SecureBoot, so a fresh installation is required.
SecureBoot with Sidero Labs Images
Sidero Labs provides Talos images signed with the Sidero Labs SecureBoot key via Image Factory.
Note: The SecureBoot images are available for Talos releases starting from v1.5.0.
The easiest way to get started with SecureBoot is to download the ISO, and boot it on a UEFI-enabled system which has SecureBoot enabled in setup mode.
The ISO bootloader will enroll the keys in the UEFI firmware, and boot Talos Linux in SecureBoot mode.
The install should be performed using the SecureBoot installer (set it in the Talos machine configuration): factory.talos.dev/installer-secureboot/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba:v1.10.0-alpha.0
Note: SecureBoot images can also be generated with custom keys.
Booting Talos Linux in SecureBoot Mode
In this guide we will use the ISO image to boot Talos Linux in SecureBoot mode, followed by submitting machine configuration to the machine in maintenance mode. We will use one of the ways to generate and submit machine configuration to the node; please refer to the Production Notes for the full guide.
First, make sure SecureBoot is enabled in the UEFI firmware.
For the first boot, the UEFI firmware should be in the setup mode, so that the keys can be enrolled into the UEFI firmware automatically.
If the UEFI firmware does not support automatic enrollment, you may need to hit Esc to force the boot menu to appear, and select the Enroll Secure Boot keys: auto option.
Note: There are other ways to enroll the keys into the UEFI firmware, but this is out of scope of this guide.
Once Talos is running in maintenance mode, verify that secure boot is enabled:
$ talosctl -n <IP> get securitystate --insecure
NODE NAMESPACE TYPE ID VERSION SECUREBOOT
runtime SecurityState securitystate 1 true
Now we will generate the machine configuration for the node, supplying the installer-secureboot container image and applying a patch to enable TPM-based disk encryption (requires TPM 2.0):
# tpm-disk-encryption.yaml
machine:
systemDiskEncryption:
ephemeral:
provider: luks2
keys:
- slot: 0
tpm: {}
state:
provider: luks2
keys:
- slot: 0
tpm: {}
Generate machine configuration:
talosctl gen config <cluster-name> https://<endpoint>:6443 --install-image=factory.talos.dev/installer-secureboot/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba:v1.10.0-alpha.0 --install-disk=/dev/sda --config-patch @tpm-disk-encryption.yaml
Apply machine configuration to the node:
talosctl -n <IP> apply-config --insecure -f controlplane.yaml
Talos will perform the installation to the disk and reboot the node. Please make sure that the ISO image is not attached to the node anymore, otherwise the node will boot from the ISO image again.
Once the node is rebooted, verify that the node is running in secure boot mode:
talosctl -n <IP> --talosconfig=talosconfig get securitystate
Upgrading Talos Linux
Any change to the boot asset (kernel, initramfs, kernel command line) requires the UKI to be regenerated and the installer image to be rebuilt.
Follow the steps above to generate a new installer image when updating the boot assets: use a new Talos version, add a system extension, or modify the kernel command line.
Once the new installer image is pushed to the registry, upgrade the node using the new installer image.
It is important to preserve the UKI signing key and the PCR signing key, otherwise the node will not be able to boot with the new UKI and unlock the encrypted partitions.
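For example, the node can then be upgraded with the regular upgrade flow once the rebuilt installer is available (a sketch; replace the image reference and version with your own rebuilt installer):
talosctl -n <IP> upgrade --image factory.talos.dev/installer-secureboot/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba:<new-version>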
Disk Encryption with TPM
When encrypting the disk partition for the first time, Talos Linux generates a random disk encryption key and seals (encrypts) it with the TPM device. The TPM unlock policy is configured to trust the expected policy signed by the PCR signing key. This way TPM unlocking doesn’t depend on the exact PCR measurements, but rather on the expected policy signed by the PCR signing key and the state of SecureBoot (PCR 7 measurement, including secureboot status and the list of enrolled keys).
When the UKI image is generated, the UKI is measured and expected measurements are combined into TPM unlock policy and signed with the PCR signing key.
During the boot process, the systemd-stub component of the UKI measures the UKI sections into the TPM device.
During boot, Talos Linux appends the measurements of the boot phases to the PCR register, and once the boot reaches the point of mounting the encrypted disk partition, the expected signed policy from the UKI is matched against the measured values to unlock the TPM, and the TPM unseals the disk encryption key which is then used to unlock the disk partition.
During an upgrade, as long as the new UKI contains a PCR policy signed with the same PCR signing key and the SecureBoot state has not changed, the disk partition will be unlocked successfully.
Disk encryption is also tied to the state of PCR register 7, so that it unlocks only if SecureBoot is enabled and the set of enrolled keys hasn’t changed.
Other Boot Options
Unified Kernel Image (UKI) is a UEFI-bootable image which can be booted directly by the UEFI firmware, skipping the systemd-boot bootloader.
In network boot mode, the UKI can be used directly as well, as it contains the full set of boot assets required to boot Talos Linux.
When SecureBoot is enabled, the UKI image ignores any kernel command line arguments passed to it, but rather uses the kernel command line arguments embedded into the UKI image itself. If kernel command line arguments need to be changed, the UKI image needs to be rebuilt with the new kernel command line arguments.
SecureBoot with Custom Keys
Generating the Keys
Talos requires two sets of keys to be used for the SecureBoot process:
- SecureBoot key is used to sign the boot assets and it is enrolled into the UEFI firmware.
- PCR Signing Key is used to sign the TPM policy, which is used to seal the disk encryption key.
The same key might be used for both, but it is recommended to use separate keys for each purpose.
Talos provides a utility to generate the keys, but existing PKI infrastructure can be used as well:
$ talosctl gen secureboot uki --common-name "SecureBoot Key"
writing _out/uki-signing-cert.pem
writing _out/uki-signing-cert.der
writing _out/uki-signing-key.pem
The generated certificate and private key are written to disk in PEM-encoded format (RSA 4096-bit key). The certificate is also written in DER format for the systems which expect the certificate in DER format.
PCR signing key can be generated with:
$ talosctl gen secureboot pcr
writing _out/pcr-signing-key.pem
The file containing the private key is written to disk in PEM-encoded format (RSA 2048-bit key).
Optionally, the UEFI automatic key enrollment database can be generated using the _out/uki-signing-* files as input:
$ talosctl gen secureboot database
writing _out/db.auth
writing _out/KEK.auth
writing _out/PK.auth
These files can be used to enroll the keys into the UEFI firmware automatically when booting from a SecureBoot ISO while UEFI firmware is in the setup mode.
Generating the SecureBoot Assets
Once the keys are generated, they can be used to sign the Talos boot assets to generate required ISO images, PXE boot assets, disk images, installer containers, etc. In this guide we will generate a SecureBoot ISO image and an installer image.
$ docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 secureboot-iso
profile ready:
arch: amd64
platform: metal
secureboot: true
version: v1.10.0-alpha.0
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
sdStub:
path: /usr/install/amd64/systemd-stub.efi
sdBoot:
path: /usr/install/amd64/systemd-boot.efi
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.5.0-alpha.3-35-ge0f383598-dirty
secureboot:
signingKeyPath: /secureboot/uki-signing-key.pem
signingCertPath: /secureboot/uki-signing-cert.pem
pcrSigningKeyPath: /secureboot/pcr-signing-key.pem
pcrPublicKeyPath: /secureboot/pcr-signing-public-key.pem
platformKeyPath: /secureboot/PK.auth
keyExchangeKeyPath: /secureboot/KEK.auth
signatureKeyPath: /secureboot/db.auth
output:
kind: iso
outFormat: raw
skipped initramfs rebuild (no system extensions)
kernel command line: talos.platform=metal console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 lockdown=confidentiality
UKI ready
ISO ready
output asset path: /out/metal-amd64-secureboot.iso
Next, the installer image should be generated to install Talos to disk on a SecureBoot-enabled system:
$ docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 secureboot-installer
profile ready:
arch: amd64
platform: metal
secureboot: true
version: v1.10.0-alpha.0
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
sdStub:
path: /usr/install/amd64/systemd-stub.efi
sdBoot:
path: /usr/install/amd64/systemd-boot.efi
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.10.0-alpha.0
secureboot:
signingKeyPath: /secureboot/uki-signing-key.pem
signingCertPath: /secureboot/uki-signing-cert.pem
pcrSigningKeyPath: /secureboot/pcr-signing-key.pem
pcrPublicKeyPath: /secureboot/pcr-signing-public-key.pem
platformKeyPath: /secureboot/PK.auth
keyExchangeKeyPath: /secureboot/KEK.auth
signatureKeyPath: /secureboot/db.auth
output:
kind: installer
outFormat: raw
skipped initramfs rebuild (no system extensions)
kernel command line: talos.platform=metal console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 lockdown=confidentiality
UKI ready
installer container image ready
output asset path: /out/installer-amd64-secureboot.tar
The generated container image should be pushed to some container registry which Talos can access during the installation, e.g.:
crane push _out/installer-amd64-secureboot.tar ghcr.io/<user>/installer-amd64-secureboot:v1.10.0-alpha.0
The generated ISO and installer images might be further customized with system extensions, extra kernel command line arguments, etc.
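For example, system extensions and extra kernel arguments can be passed to the imager when generating the SecureBoot assets (a hedged sketch; the --system-extension-image and --extra-kernel-arg flags are assumed to be supported by this imager version, and the extension reference is illustrative):
docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 secureboot-iso \
  --system-extension-image ghcr.io/siderolabs/qemu-guest-agent:<version> \
  --extra-kernel-arg console=ttyS0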
2 - Virtualized Platforms
2.1 - Hyper-V
Prerequisites
- Download the latest metal-amd64.iso ISO from the GitHub releases page
- Create a New-TalosVM folder in any of your PS Module Path folders ($env:PSModulePath -split ';') and save the New-TalosVM.psm1 there
Plan Overview
Here we will create a basic 3 node cluster with a single control-plane node and two worker nodes. The only difference between control plane and worker node is the amount of RAM and an additional storage VHD. This is personal preference and can be configured to your liking.
We are using a VMNamePrefix argument for a VM name prefix and not the full hostname.
This command will find any existing VM with that prefix and "+1" the highest suffix it finds.
For example, if VMs talos-cp01 and talos-cp02 exist, this will create VMs starting from talos-cp03, depending on the NumberOfVMs argument.
Setup a Control Plane Node
Use the following command to create a single control plane node:
New-TalosVM -VMNamePrefix talos-cp -CPUCount 2 -StartupMemory 4GB -SwitchName LAB -TalosISOPath C:\ISO\metal-amd64.iso -NumberOfVMs 1 -VMDestinationBasePath 'D:\Virtual Machines\Test VMs\Talos'
This will create the talos-cp01 VM and power it on.
Setup Worker Nodes
Use the following command to create 2 worker nodes:
New-TalosVM -VMNamePrefix talos-worker -CPUCount 4 -StartupMemory 8GB -SwitchName LAB -TalosISOPath C:\ISO\metal-amd64.iso -NumberOfVMs 2 -VMDestinationBasePath 'D:\Virtual Machines\Test VMs\Talos' -StorageVHDSize 50GB
This will create two VMs: talos-worker01 and talos-worker02, and attach an additional VHD of 50GB for storage (which in my case will be passed to Mayastor).
Pushing Config to the Nodes
Now that our VMs are ready, find their IP addresses from the console of each VM. With that information, push the config to the control plane node with:
# set control plane IP variable
$CONTROL_PLANE_IP='10.10.10.x'
# Generate talos config
talosctl gen config talos-cluster https://$($CONTROL_PLANE_IP):6443 --output-dir .
# Apply config to control plane node
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file .\controlplane.yaml
Pushing Config to Worker Nodes
Similarly, for the workers:
talosctl apply-config --insecure --nodes 10.10.10.x --file .\worker.yaml
Apply the config to both nodes.
Bootstrap Cluster
Now that our nodes are ready, we are ready to bootstrap the Kubernetes cluster.
# Use the following command to set the node and endpoint permanently in the config so you don't have to type them every time
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
# Bootstrap cluster
talosctl bootstrap
# Generate kubeconfig
talosctl kubeconfig .
This will generate the kubeconfig file, which you can use to connect to the cluster.
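For example, a quick way to verify access with the generated kubeconfig (a minimal sketch):
kubectl --kubeconfig .\kubeconfig get nodes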
2.2 - KVM
Talos is known to work on KVM.
We don’t yet have a documented guide specific to KVM; however, you can have a look at our Vagrant & Libvirt guide which uses KVM for virtualization.
If you run into any issues, our community can probably help!
2.3 - OpenNebula
Talos is known to work on OpenNebula.
2.4 - Proxmox
In this guide we will create a Kubernetes cluster using Proxmox.
Video Walkthrough
To see a live demo of this writeup, visit Youtube here:
Installation
How to Get Proxmox
It is assumed that you have already installed Proxmox onto the server you wish to create Talos VMs on. Visit the Proxmox downloads page if necessary.
Install talosctl
You can download talosctl on macOS and Linux via:
brew install siderolabs/tap/talosctl
For manual installation and other platforms, please see the talosctl installation guide.
Download ISO Image
In order to install Talos in Proxmox, you will need the ISO image from Image Factory.
mkdir -p _out/
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/<version>/metal-<arch>.iso -L -o _out/metal-<arch>.iso
For example, version v1.10.0-alpha.0 for the amd64 architecture:
mkdir -p _out/
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/metal-amd64.iso -L -o _out/metal-amd64.iso
QEMU guest agent support (iso)
- If you need the QEMU guest agent so you can do guest VM shutdowns of your Talos VMs, then you will need a custom ISO
- To get this, navigate to https://factory.talos.dev/
- Scroll down and select your Talos version (v1.10.0-alpha.0 for example)
- Then tick the box for siderolabs/qemu-guest-agent and submit
- This will provide you with a link to the bare metal ISO
- The lines we’re interested in are as follows
Metal ISO
amd64 ISO
https://factory.talos.dev/image/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515/v1.10.0-alpha.0/metal-amd64.iso
arm64 ISO
https://factory.talos.dev/image/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515/v1.10.0-alpha.0/metal-arm64.iso
Installer Image
For the initial Talos install or upgrade use the following installer image:
factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.10.0-alpha.0
- Download the above ISO (this will most likely be amd64 for you)
- Take note of the factory.talos.dev/installer URL as you'll need it later
Upload ISO
From the Proxmox UI, select the “local” storage and enter the “Content” section. Click the “Upload” button:
Select the ISO you downloaded previously, then hit “Upload”
Create VMs
Before starting, familiarise yourself with the system requirements for Talos and assign VM resources accordingly.
Create a new VM by clicking the “Create VM” button in the Proxmox UI:
Fill out a name for the new VM:
In the OS tab, select the ISO we uploaded earlier:
Keep the defaults set in the “System” tab.
Keep the defaults in the “Hard Disk” tab as well, only changing the size if desired.
In the “CPU” section, give at least 2 cores to the VM:
Note: As of Talos v1.0 (which requires the x86-64-v2 microarchitecture), prior to Proxmox V8.0, booting with the default Processor Type kvm64 will not work. You can enable the required CPU features after creating the VM by adding the following line to the corresponding /etc/pve/qemu-server/<vmid>.conf file:
args: -cpu kvm64,+cx16,+lahf_lm,+popcnt,+sse3,+ssse3,+sse4.1,+sse4.2
Alternatively, you can set the Processor Type to host if your Proxmox host supports these CPU features; this however prevents using live VM migration.
Verify that the RAM is set to at least 2GB:
Keep the default values for networking, verifying that the VM is set to come up on the bridge interface:
Finish creating the VM by clicking through the “Confirm” tab and then “Finish”.
Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.
Note: Talos doesn't support memory hot plugging; if creating the VM programmatically, don't enable memory hotplug on your Talos VMs. Doing so will cause Talos to be unable to see all available memory and have insufficient memory to complete installation of the cluster.
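If you prefer to create the VMs from the Proxmox shell instead of the UI, a hedged sketch using the qm CLI follows (the VM ID, storage names, bridge and sizes are assumptions; adjust them to your environment, and note that ballooning is left disabled):
# create a control plane VM booting from the uploaded ISO, falling back to the virtual disk afterwards
qm create 100 --name talos-cp-1 --cores 2 --memory 4096 --balloon 0 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/metal-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2'
qm start 100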
Start Control Plane Node
Once the VMs have been created and updated, start the VM that will be the first control plane node. This VM will boot the ISO image specified earlier and enter “maintenance mode”.
With DHCP server
Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received.
Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP for the rest of this guide.
If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4.
Without DHCP server
To apply the machine configuration in maintenance mode, the VM has to have an IP on the network, so you can set it manually at boot time.
Press e at the boot menu and set the IP parameters for the VM.
The format is:
ip=<client-ip>:<srv-ip>:<gw-ip>:<netmask>:<host>:<device>:<autoconf>
For example, if $CONTROL_PLANE_IP is 192.168.0.100 and the gateway is 192.168.0.1:
linux /boot/vmlinuz init_on_alloc=1 slab_nomerge pti=on panic=0 consoleblank=0 printk.devkmsg=on earlyprintk=ttyS0 console=tty0 console=ttyS0 talos.platform=metal ip=192.168.0.100::192.168.0.1:255.255.255.0::eth0:off
Then press Ctrl-x or F10
Generate Machine Configurations
With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out directory: controlplane.yaml, worker.yaml, and talosconfig.
Note: The Talos config by default will install to /dev/sda. Depending on your setup the virtual disk may be mounted differently, e.g. /dev/vda. You can check for disks by running the following command:
talosctl disks --insecure --nodes $CONTROL_PLANE_IP
Update the controlplane.yaml and worker.yaml config files to point to the correct disk location.
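For example, if the disk shows up as /dev/vda, you can regenerate the configs pointing at it instead of editing the files by hand (a sketch; --force overwrites the previously generated files in _out):
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out --install-disk /dev/vda --force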
QEMU guest agent support
For QEMU guest agent support, you can generate the config with the custom install image:
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out --install-image factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.10.0-alpha.0
- In Proxmox, go to your VM –> Options and ensure that QEMU Guest Agent is Enabled
- The QEMU agent is now configured
Create Control Plane Node
Using the controlplane.yaml generated above, you can now apply this config using talosctl.
Issue:
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml
You should now see some action in the Proxmox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.
Note: This process can be repeated multiple times to create an HA control plane.
Create Worker Node
Create at least a single worker node using a process similar to the control plane creation above.
Start the worker node VM and wait for it to enter “maintenance mode”.
Take note of the worker node's IP address, which will be referred to as $WORKER_IP.
Issue:
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml
Note: This process can be repeated multiple times to add additional workers.
Using the Cluster
Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster.
For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace.
To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.
First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
Bootstrap Etcd
talosctl bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl kubeconfig .
Cleaning Up
To cleanup, simply stop and delete the virtual machines from the Proxmox UI.
2.5 - Vagrant & Libvirt
Prerequisites
- Linux OS
- Vagrant installed
- vagrant-libvirt plugin installed
- talosctl installed
- kubectl installed
Overview
We will use Vagrant and its libvirt plugin to create a KVM-based cluster with 3 control plane nodes and 1 worker node.
For this, we will mount the Talos ISO into the VMs using a virtual CD-ROM, and configure the VMs to attempt to boot from the disk first with a fallback to the CD-ROM.
We will also configure a virtual IP address on Talos to achieve high-availability on kube-apiserver.
Preparing the environment
First, we download the latest metal-amd64.iso ISO from GitHub releases into the /tmp directory.
wget --timestamping https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/metal-amd64.iso -O /tmp/metal-amd64.iso
Create a Vagrantfile with the following contents:
Vagrant.configure("2") do |config|
config.vm.define "control-plane-node-1" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-1.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/metal-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "control-plane-node-2" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-2.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/metal-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "control-plane-node-3" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 2
domain.memory = 2048
domain.serial :type => "file", :source => {:path => "/tmp/control-plane-node-3.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/metal-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
config.vm.define "worker-node-1" do |vm|
vm.vm.provider :libvirt do |domain|
domain.cpus = 1
domain.memory = 1024
domain.serial :type => "file", :source => {:path => "/tmp/worker-node-1.log"}
domain.storage :file, :device => :cdrom, :path => "/tmp/metal-amd64.iso"
domain.storage :file, :size => '4G', :type => 'raw'
domain.boot 'hd'
domain.boot 'cdrom'
end
end
end
Bring up the nodes
Check the status of vagrant VMs:
vagrant status
You should see the VMs in “not created” state:
Current machine states:
control-plane-node-1 not created (libvirt)
control-plane-node-2 not created (libvirt)
control-plane-node-3 not created (libvirt)
worker-node-1 not created (libvirt)
Bring up the vagrant environment:
vagrant up --provider=libvirt
Check the status again:
vagrant status
Now you should see the VMs in “running” state:
Current machine states:
control-plane-node-1 running (libvirt)
control-plane-node-2 running (libvirt)
control-plane-node-3 running (libvirt)
worker-node-1 running (libvirt)
Find out the IP addresses assigned by the libvirt DHCP by running:
virsh list | grep vagrant | awk '{print $2}' | xargs -t -L1 virsh domifaddr
Output will look like the following:
virsh domifaddr vagrant_control-plane-node-2
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet0 52:54:00:f9:10:e5 ipv4 192.168.121.119/24
virsh domifaddr vagrant_control-plane-node-1
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet1 52:54:00:0f:ae:59 ipv4 192.168.121.203/24
virsh domifaddr vagrant_worker-node-1
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet2 52:54:00:6f:28:95 ipv4 192.168.121.69/24
virsh domifaddr vagrant_control-plane-node-3
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet3 52:54:00:03:45:10 ipv4 192.168.121.125/24
Our control plane nodes have the IPs 192.168.121.203, 192.168.121.119, 192.168.121.125, and the worker node has the IP 192.168.121.69.
Now you should be able to interact with Talos nodes that are in maintenance mode:
talosctl -n 192.168.121.203 disks --insecure
Sample output:
DEV MODEL SERIAL TYPE UUID WWID MODALIAS NAME SIZE BUS_PATH
/dev/vda - - HDD - - virtio:d00000002v00001AF4 - 8.6 GB /pci0000:00/0000:00:03.0/virtio0/
Installing Talos
Pick an endpoint IP in the vagrant-libvirt subnet but not used by any nodes, for example 192.168.121.100.
Generate a machine configuration:
talosctl gen config my-cluster https://192.168.121.100:6443 --install-disk /dev/vda
Edit controlplane.yaml to add the virtual IP you picked to a network interface under .machine.network.interfaces, for example:
machine:
network:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: 192.168.121.100
Apply the configuration to the initial control plane node:
talosctl -n 192.168.121.203 apply-config --insecure --file controlplane.yaml
You can tail the logs of the node:
sudo tail -f /tmp/control-plane-node-1.log
Set up your shell to use the generated talosconfig and configure its endpoints (use the IPs of the control plane nodes):
export TALOSCONFIG=$(realpath ./talosconfig)
talosctl config endpoint 192.168.121.203 192.168.121.119 192.168.121.125
Bootstrap the Kubernetes cluster from the initial control plane node:
talosctl -n 192.168.121.203 bootstrap
Finally, apply the machine configurations to the remaining nodes:
talosctl -n 192.168.121.119 apply-config --insecure --file controlplane.yaml
talosctl -n 192.168.121.125 apply-config --insecure --file controlplane.yaml
talosctl -n 192.168.121.69 apply-config --insecure --file worker.yaml
After a while, you should see that all the members have joined:
talosctl -n 192.168.121.203 get members
The output will be like the following:
NODE NAMESPACE TYPE ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES
192.168.121.203 cluster Member talos-192-168-121-119 1 talos-192-168-121-119 controlplane Talos (v1.1.0) ["192.168.121.119"]
192.168.121.203 cluster Member talos-192-168-121-69 1 talos-192-168-121-69 worker Talos (v1.1.0) ["192.168.121.69"]
192.168.121.203 cluster Member talos-192-168-121-203 6 talos-192-168-121-203 controlplane Talos (v1.1.0) ["192.168.121.100","192.168.121.203"]
192.168.121.203 cluster Member talos-192-168-121-125 1 talos-192-168-121-125 controlplane Talos (v1.1.0) ["192.168.121.125"]
Interacting with Kubernetes cluster
Retrieve the kubeconfig from the cluster:
talosctl -n 192.168.121.203 kubeconfig ./kubeconfig
List the nodes in the cluster:
kubectl --kubeconfig ./kubeconfig get node -owide
You will see an output similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-192-168-121-203 Ready control-plane,master 3m10s v1.24.2 192.168.121.203 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-69 Ready <none> 2m25s v1.24.2 192.168.121.69 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-119 Ready control-plane,master 8m46s v1.24.2 192.168.121.119 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
talos-192-168-121-125 Ready control-plane,master 3m11s v1.24.2 192.168.121.125 <none> Talos (v1.1.0) 5.15.48-talos containerd://1.6.6
Congratulations, you have a highly-available Talos cluster running!
Cleanup
You can destroy the vagrant environment by running:
vagrant destroy -f
And remove the ISO image you downloaded:
sudo rm -f /tmp/metal-amd64.iso
2.6 - VMware
Creating a Cluster via the govc CLI
In this guide we will create an HA Kubernetes cluster with 2 worker nodes.
We will use the govc CLI which can be downloaded here.
Prereqs/Assumptions
This guide will use the virtual IP (“VIP”) functionality that is built into Talos in order to provide a stable, known IP for the Kubernetes control plane. This simply means the user should pick an IP on their “VM Network” to designate for this purpose and keep it handy for future steps.
Create the Machine Configuration Files
Generating Base Configurations
Using the VIP chosen in the prereq steps, we will now generate the base configuration files for the Talos machines.
This can be done with the talosctl gen config ... command.
Take note that we will also use a JSON6902 patch when creating the configs so that the control plane nodes get some special information about the VIP we chose earlier, as well as a daemonset to install VMware tools on Talos nodes.
First, download cp.patch.yaml to your local machine and edit the VIP to match your chosen IP.
You can do this by issuing: curl -fsSLO https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.10/talos-guides/install/virtualized-platforms/vmware/cp.patch.yaml
Its contents should look like the following:
- op: add
path: /machine/network
value:
interfaces:
- interface: eth0
dhcp: true
vip:
ip: <VIP>
With the patch in hand, generate machine configs with:
$ talosctl gen config vmware-test https://<VIP>:<port> --config-patch-control-plane @cp.patch.yaml
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking if needed.
Optionally, you can specify additional patches by adding to the cp.patch.yaml file downloaded earlier, or create your own patch files.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Set Environment Variables
govc makes use of the following environment variables:
export GOVC_URL=<vCenter url>
export GOVC_USERNAME=<vCenter username>
export GOVC_PASSWORD=<vCenter password>
Note: If your vCenter installation makes use of self-signed certificates, you'll want to export GOVC_INSECURE=true.
There are some additional variables that you may need to set:
export GOVC_DATACENTER=<vCenter datacenter>
export GOVC_RESOURCE_POOL=<vCenter resource pool>
export GOVC_DATASTORE=<vCenter datastore>
export GOVC_NETWORK=<vCenter network>
Choose Install Approach
As part of this guide, we have a more automated install script that handles some of the complexity of importing OVAs and creating VMs. If you wish to use this script, we will detail that next. If you wish to carry out the manual approach, simply skip ahead to the “Manual Approach” section.
Scripted Install
Download the vmware.sh script to your local machine.
You can do this by issuing: curl -fsSL "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.10/talos-guides/install/virtualized-platforms/vmware/vmware.sh" | sed s/latest/v1.10.0-alpha.0/ > vmware.sh
This script has default variables for things like Talos version and cluster name that may be interesting to tweak before deploying.
The script downloads the VMware OVA from Image Factory with the talos-vmtoolsd extension pre-installed.
Import OVA
To create a content library and import the Talos OVA corresponding to the mentioned Talos version, simply issue:
./vmware.sh upload_ova
Create Cluster
With the OVA uploaded to the content library, you can create a 5 node (by default) cluster with 3 control plane and 2 worker nodes:
./vmware.sh create
This step will create a VM from the OVA, edit the settings based on the env variables used for VM size/specs, then power on the VMs.
You may now skip past the “Manual Approach” section down to “Bootstrap Cluster”.
Manual Approach
Import the OVA into vCenter
A talos.ova asset is available from Image Factory.
We will refer to the version of the release as $TALOS_VERSION below.
It can be easily exported with export TALOS_VERSION="v0.3.0-alpha.10" or similar.
The download link already includes the talos-vmtoolsd extension.
curl -LO https://factory.talos.dev/image/903b2da78f99adef03cbbd4df6714563823f63218508800751560d3bc3557e40/${TALOS_VERSION}/vmware-amd64.ova
Create a content library (if needed) with:
govc library.create <library name>
Import the OVA to the library with:
govc library.import -n talos-${TALOS_VERSION} <library name> /path/to/downloaded/talos.ova
Create the Bootstrap Node
We’ll clone the OVA to create the bootstrap node (our first control plane node).
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-1
Talos makes use of the guestinfo facility of VMware to provide the machine/cluster configuration.
This can be set using the govc vm.change command.
To facilitate persistent storage using the vSphere cloud provider integration with Kubernetes, disk.enableUUID=1 is used.
govc vm.change \
-e "guestinfo.talos.config=$(cat controlplane.yaml | base64)" \
-e "disk.enableUUID=1" \
-vm control-plane-1
Update Hardware Resources for the Bootstrap Node
- -c is used to configure the number of CPUs
- -m is used to configure the amount of memory (in MB)
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-1
The following can be used to adjust the EPHEMERAL disk size.
govc vm.disk.change -vm control-plane-1 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-1
Create the Remaining Control Plane Nodes
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-2
govc vm.change \
-e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
-e "disk.enableUUID=1" \
-vm control-plane-2
govc library.deploy <library name>/talos-${TALOS_VERSION} control-plane-3
govc vm.change \
-e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
-e "disk.enableUUID=1" \
-vm control-plane-3
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-2
govc vm.change \
-c 2 \
-m 4096 \
-vm control-plane-3
govc vm.disk.change -vm control-plane-2 -disk.name disk-1000-0 -size 10G
govc vm.disk.change -vm control-plane-3 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-2
govc vm.power -on control-plane-3
Update Settings for the Worker Nodes
govc library.deploy <library name>/talos-${TALOS_VERSION} worker-1
govc vm.change \
-e "guestinfo.talos.config=$(base64 worker.yaml)" \
-e "disk.enableUUID=1" \
-vm worker-1
govc library.deploy <library name>/talos-${TALOS_VERSION} worker-2
govc vm.change \
-e "guestinfo.talos.config=$(base64 worker.yaml)" \
-e "disk.enableUUID=1" \
-vm worker-2
govc vm.change \
-c 4 \
-m 8192 \
-vm worker-1
govc vm.change \
-c 4 \
-m 8192 \
-vm worker-2
govc vm.disk.change -vm worker-1 -disk.name disk-1000-0 -size 10G
govc vm.disk.change -vm worker-2 -disk.name disk-1000-0 -size 10G
govc vm.power -on worker-1
govc vm.power -on worker-2
Bootstrap Cluster
In the vSphere UI, open a console to one of the control plane nodes. You should see some output stating that etcd should be bootstrapped. This text should look like:
"etcd is waiting to join the cluster, if this node is the first node in the cluster, please run `talosctl bootstrap` against one of the following IPs:
Take note of the IP mentioned here and issue:
talosctl --talosconfig talosconfig bootstrap -e <control plane IP> -n <control plane IP>
Keep this IP handy for the following steps as well.
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig config endpoint <control plane IP>
talosctl --talosconfig talosconfig config node <control plane IP>
talosctl --talosconfig talosconfig kubeconfig .
Configure talos-vmtoolsd
The talos-vmtoolsd application was deployed as a daemonset as part of the cluster creation; however, we must now provide a talos credentials file for it to use.
Create a new talosconfig with:
talosctl --talosconfig talosconfig -n <control plane IP> config new vmtoolsd-secret.yaml --roles os:admin
Create a secret from the talosconfig:
kubectl -n kube-system create secret generic talos-vmtoolsd-config \
--from-file=talosconfig=./vmtoolsd-secret.yaml
Clean up the generated file from local system:
rm vmtoolsd-secret.yaml
Once configured, you should now see these daemonset pods go into “Running” state and in vCenter, you will now see IPs and info from the Talos nodes present in the UI.
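For example, a quick way to check that the daemonset pods are healthy (a sketch; the exact pod names depend on how the daemonset was deployed):
kubectl -n kube-system get pods | grep vmtoolsd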
2.7 - Xen
Talos is known to work on Xen. We don’t yet have a documented guide specific to Xen; however, you can follow the General Getting Started Guide. If you run into any issues, our community can probably help!
3 - Cloud Platforms
3.1 - Akamai
Creating a Talos Linux Cluster on Akamai Connected Cloud via the CLI
This guide will demonstrate how to create a highly available Kubernetes cluster with one worker using the Akamai Connected Cloud provider.
Akamai Connected Cloud has a very well-documented REST API, and an open-source CLI tool to interact with the API which will be used in this guide.
Make sure to follow the installation and authentication instructions for the linode-cli tool.
jq and talosctl also need to be installed.
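For example, on macOS or Linux with Homebrew both can be installed in one step (a sketch; use your distribution's package manager if you prefer):
brew install jq siderolabs/tap/talosctl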
Upload image
Download the Akamai image akamai-amd64.raw.gz from Image Factory.
Upload the image:
export REGION=us-ord
linode-cli image-upload --region ${REGION} --label talos akamai-amd64.raw.gz
Create a Load Balancer
export REGION=us-ord
linode-cli nodebalancers create --region ${REGION} --no-defaults --label talos
export NODEBALANCER_ID=$(linode-cli nodebalancers list --label talos --format id --text --no-headers)
linode-cli nodebalancers config-create --port 443 --protocol tcp --check connection ${NODEBALANCER_ID}
Create the Machine Configuration Files
Using the IP address (or DNS name, if you have created one) of the load balancer, generate the base configuration files for the Talos machines. Also note that the load balancer forwards port 443 to port 6443 on the associated nodes, so we should use 443 as the port in the config definition:
export NODEBALANCER_IP=$(linode-cli nodebalancers list --label talos --format ipv4 --text --no-headers)
talosctl gen config talos-kubernetes-akamai https://${NODEBALANCER_IP} --with-examples=false
Create the Linodes
Create the Control Plane Nodes
Although root passwords are not used by Talos, Linode requires that a root password be associated with a linode during creation.
Run the following commands to create three control plane nodes:
export IMAGE_ID=$(linode-cli images list --label talos --format id --text --no-headers)
export NODEBALANCER_ID=$(linode-cli nodebalancers list --label talos --format id --text --no-headers)
export NODEBALANCER_CONFIG_ID=$(linode-cli nodebalancers configs-list ${NODEBALANCER_ID} --format id --text --no-headers)
export REGION=us-ord
export LINODE_TYPE=g6-standard-4
export ROOT_PW=$(pwgen 16)
for id in $(seq 3); do
linode_label="talos-control-plane-${id}"
# create linode
linode-cli linodes create \
--no-defaults \
--root_pass ${ROOT_PW} \
--type ${LINODE_TYPE} \
--region ${REGION} \
--image ${IMAGE_ID} \
--label ${linode_label} \
--private_ip true \
--tags talos-control-plane \
--group "talos-control-plane" \
--metadata.user_data "$(base64 -i ./controlplane.yaml)"
# change kernel to "direct disk"
linode_id=$(linode-cli linodes list --label ${linode_label} --format id --text --no-headers)
config_id=$(linode-cli linodes configs-list ${linode_id} --format id --text --no-headers)
linode-cli linodes config-update ${linode_id} ${config_id} --kernel "linode/direct-disk"
# add machine to nodebalancer
private_ip=$(linode-cli linodes list --label ${linode_label} --format ipv4 --json | jq -r ".[0].ipv4[1]")
linode-cli nodebalancers node-create ${NODEBALANCER_ID} ${NODEBALANCER_CONFIG_ID} --label ${linode_label} --address ${private_ip}:6443
done
Create the Worker Nodes
Although root passwords are not used by Talos, Linode requires that a root password be associated with a linode during creation.
Run the following to create a worker node:
export IMAGE_ID=$(linode-cli images list --label talos --format id --text --no-headers)
export REGION=us-ord
export LINODE_TYPE=g6-standard-4
export LINODE_LABEL="talos-worker-1"
export ROOT_PW=$(pwgen 16)
linode-cli linodes create \
--no-defaults \
--root_pass ${ROOT_PW} \
--type ${LINODE_TYPE} \
--region ${REGION} \
--image ${IMAGE_ID} \
--label ${LINODE_LABEL} \
--private_ip true \
--tags talos-worker \
--group "talos-worker" \
--metadata.user_data "$(base64 -i ./worker.yaml)"
linode_id=$(linode-cli linodes list --label ${LINODE_LABEL} --format id --text --no-headers)
config_id=$(linode-cli linodes configs-list ${linode_id} --format id --text --no-headers)
linode-cli linodes config-update ${linode_id} ${config_id} --kernel "linode/direct-disk"
Bootstrap Etcd
Set the endpoints
and nodes
:
export LINODE_LABEL=talos-control-plane-1
export LINODE_IP=$(linode-cli linodes list --label ${LINODE_LABEL} --format ipv4 --json | jq -r ".[0].ipv4[0]")
talosctl --talosconfig talosconfig config endpoint ${LINODE_IP}
talosctl --talosconfig talosconfig config node ${LINODE_IP}
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point, we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
We can also watch the cluster bootstrap via:
talosctl --talosconfig talosconfig health
Alternatively, we can also watch the node overview, logs and real-time metrics dashboard via:
talosctl --talosconfig talosconfig dashboard
3.2 - AWS
Creating a Cluster via the AWS CLI
In this guide we will create an HA Kubernetes cluster with 3 control plane nodes across 3 availability zones. You should have an existing AWS account and have the AWS CLI installed and configured. If you need more information on AWS specifics, please see the official AWS documentation.
To install the dependencies for this tutorial, you can use Homebrew on macOS or Linux:
brew install siderolabs/tap/talosctl kubectl jq curl xz
If you would like to create the infrastructure via terraform or opentofu, please see the example in the contrib repository.
Note: this guide is not a production setup, and the steps were tested in bash and zsh shells.
Create AWS Resources
We will be creating a control plane with three EC2 instances spread across three availability zones. It is recommended not to use the default VPC, so we will create a new one for this tutorial.
Change to your desired region and CIDR block and create a VPC:
Make sure your subnet does not overlap with 10.244.0.0/16 or 10.96.0.0/12, the default pod and service subnets in Kubernetes.
AWS_REGION="us-west-2"
IPV4_CIDR="10.1.0.0/18"
VPC_ID=$(aws ec2 create-vpc \
--cidr-block $IPV4_CIDR \
--output text --query 'Vpc.VpcId')
Create the Subnets
Create 3 smaller CIDRs to use for each subnet in different availability zones. Make sure to adjust these CIDRs if you changed the default value from the last command.
IPV4_CIDRS=( "10.1.0.0/22" "10.1.4.0/22" "10.1.8.0/22" )
Next, create a subnet in each availability zone.
Note: If you’re using zsh you need to run
setopt KSH_ARRAYS
to have arrays referenced properly.
CIDR=0
declare -a SUBNETS
AZS=($(aws ec2 describe-availability-zones \
--query 'AvailabilityZones[].ZoneName' \
--filter "Name=state,Values=available" \
--output text | tr -s '\t' '\n' | head -n3))
for AZ in ${AZS[@]}; do
SUBNETS[$CIDR]=$(aws ec2 create-subnet \
--vpc-id $VPC_ID \
--availability-zone $AZ \
--cidr-block ${IPV4_CIDRS[$CIDR]} \
--query 'Subnet.SubnetId' \
--output text)
aws ec2 modify-subnet-attribute \
--subnet-id ${SUBNETS[$CIDR]} \
--private-dns-hostname-type-on-launch resource-name
echo ${SUBNETS[$CIDR]}
((CIDR++))
done
Create an internet gateway and attach it to the VPC:
IGW_ID=$(aws ec2 create-internet-gateway \
--query 'InternetGateway.InternetGatewayId' \
--output text)
aws ec2 attach-internet-gateway \
--vpc-id $VPC_ID \
--internet-gateway-id $IGW_ID
ROUTE_TABLE_ID=$(aws ec2 describe-route-tables \
--filters "Name=vpc-id,Values=$VPC_ID" \
--query 'RouteTables[].RouteTableId' \
--output text)
aws ec2 create-route \
--route-table-id $ROUTE_TABLE_ID \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id $IGW_ID
Official AMI Images
The official AMI image IDs can be found in the cloud-images.json file attached to the Talos release.
AMI=$(curl -sL https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/cloud-images.json | \
jq -r '.[] | select(.region == "'$AWS_REGION'") | select (.arch == "amd64") | .id')
echo $AMI
If using the official AMIs, you can skip ahead to Create a Security Group.
Create your own AMIs
Using the official Talos AMIs is recommended, but if you wish to build your own AMIs, follow the procedure below.
Create the S3 Bucket
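The commands below assume a BUCKET variable holding a globally unique bucket name; set it first (the name shown here is only a placeholder):
export BUCKET="talos-aws-tutorial-bucket"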
aws s3api create-bucket \
--bucket $BUCKET \
--create-bucket-configuration LocationConstraint=$AWS_REGION \
--acl private
Create the vmimport
Role
In order to create an AMI, ensure that the vmimport
role exists as described in the official AWS documentation.
Note that the role should be associated with the S3 bucket we created above.
Create the Image Snapshot
First, download the AWS image from Image Factory:
curl -L https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/aws-amd64.raw.xz | xz -d > disk.raw
Copy the RAW disk to S3 and import it as a snapshot:
aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
SNAPSHOT_ID=$(aws ec2 import-snapshot \
--region $AWS_REGION \
--description "Talos kubernetes tutorial" \
--disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}" \
--query 'SnapshotId' \
--output text)
To check on the status of the import, run:
aws ec2 describe-import-snapshot-tasks \
--import-task-ids <import-task-id>
Once the SnapshotTaskDetail.Status
indicates completed
, we can register the image.
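If you would rather wait programmatically than re-run the command by hand, a small polling loop works; this is a sketch that assumes you saved the ImportTaskId returned by import-snapshot into an IMPORT_TASK_ID variable (not captured above):
# Poll until the snapshot import reports "completed".
# The same describe output also contains the final SnapshotId once the import is done.
while [ "$(aws ec2 describe-import-snapshot-tasks \
  --import-task-ids $IMPORT_TASK_ID \
  --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' \
  --output text)" != "completed" ]; do
  echo "waiting for snapshot import to complete"; sleep 10
done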
Register the Image
AMI=$(aws ec2 register-image \
--block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT_ID,VolumeSize=4,VolumeType=gp2}" \
--root-device-name /dev/xvda \
--virtualization-type hvm \
--architecture x86_64 \
--ena-support \
--name talos-aws-tutorial-ami \
--query 'ImageId' \
--output text)
We now have an AMI we can use to create our cluster.
Create a Security Group
SECURITY_GROUP_ID=$(aws ec2 create-security-group \
--vpc-id $VPC_ID \
--group-name talos-aws-tutorial-sg \
--description "Security Group for EC2 instances to allow ports required by Talos" \
--query 'GroupId' \
--output text)
Using the security group from above, allow all internal traffic within the same security group:
aws ec2 authorize-security-group-ingress \
--group-id $SECURITY_GROUP_ID \
--protocol all \
--port 0 \
--source-group $SECURITY_GROUP_ID
Expose the Talos API (port 50000) and the Kubernetes API (port 6443).
Note: This is only required for the control plane nodes. For a production environment you would want separate private subnets for worker nodes.
aws ec2 authorize-security-group-ingress \
--group-id $SECURITY_GROUP_ID \
--ip-permissions \
IpProtocol=tcp,FromPort=50000,ToPort=50000,IpRanges="[{CidrIp=0.0.0.0/0}]" \
IpProtocol=tcp,FromPort=6443,ToPort=6443,IpRanges="[{CidrIp=0.0.0.0/0}]" \
--query 'SecurityGroupRules[].SecurityGroupRuleId' \
--output text
We will bootstrap Talos with a MachineConfig via user-data; the Talos API is never exposed to the internet without certificate authentication.
We enable KubeSpan in this tutorial, so you need to allow inbound UDP for the WireGuard port:
aws ec2 authorize-security-group-ingress \
--group-id $SECURITY_GROUP_ID \
--ip-permissions \
IpProtocol=udp,FromPort=51820,ToPort=51820,IpRanges="[{CidrIp=0.0.0.0/0}]" \
--query 'SecurityGroupRules[].SecurityGroupRuleId' \
--output text
Create a Load Balancer
The load balancer is used for a stable Kubernetes API endpoint.
LOAD_BALANCER_ARN=$(aws elbv2 create-load-balancer \
--name talos-aws-tutorial-lb \
--subnets $(echo ${SUBNETS[@]}) \
--type network \
--ip-address-type ipv4 \
--query 'LoadBalancers[].LoadBalancerArn' \
--output text)
LOAD_BALANCER_DNS=$(aws elbv2 describe-load-balancers \
--load-balancer-arns $LOAD_BALANCER_ARN \
--query 'LoadBalancers[].DNSName' \
--output text)
Now create a target group for the load balancer:
TARGET_GROUP_ARN=$(aws elbv2 create-target-group \
--name talos-aws-tutorial-tg \
--protocol TCP \
--port 6443 \
--target-type instance \
--vpc-id $VPC_ID \
--query 'TargetGroups[].TargetGroupArn' \
--output text)
LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $LOAD_BALANCER_ARN \
--protocol TCP \
--port 6443 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN \
--query 'Listeners[].ListenerArn' \
--output text)
Create the Machine Configuration Files
We will create a machine config patch to use the AWS time servers. You can create additional patches to customize the configuration as needed.
cat <<EOF > time-server-patch.yaml
machine:
time:
servers:
- 169.254.169.123
EOF
Using the DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines.
talosctl gen config talos-k8s-aws-tutorial https://${LOAD_BALANCER_DNS}:6443 \
--with-examples=false \
--with-docs=false \
--with-kubespan \
--install-disk /dev/xvda \
--config-patch '@time-server-patch.yaml'
Note that the generated configs exceed the AWS user-data size limit unless the --with-examples=false and --with-docs=false flags are passed.
Create the EC2 Instances
Note: There is a known issue that prevents Talos from running on T2 instance types. Please use T3 if you need burstable instance types.
Create the Control Plane Nodes
declare -a CP_INSTANCES
INSTANCE_INDEX=0
for SUBNET in ${SUBNETS[@]}; do
CP_INSTANCES[${INSTANCE_INDEX}]=$(aws ec2 run-instances \
--image-id $AMI \
--subnet-id $SUBNET \
--instance-type t3.small \
--user-data file://controlplane.yaml \
--associate-public-ip-address \
--security-group-ids $SECURITY_GROUP_ID \
--count 1 \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$INSTANCE_INDEX}]" \
--query 'Instances[].InstanceId' \
--output text)
echo ${CP_INSTANCES[${INSTANCE_INDEX}]}
((INSTANCE_INDEX++))
done
Create the Worker Nodes
For the worker nodes we will create a new launch template with the worker.yaml
machine configuration and create an autoscaling group.
WORKER_LAUNCH_TEMPLATE_ID=$(aws ec2 create-launch-template \
--launch-template-name talos-aws-tutorial-worker \
--launch-template-data '{
"ImageId":"'$AMI'",
"InstanceType":"t3.small",
"UserData":"'$(base64 -w0 worker.yaml)'",
"NetworkInterfaces":[{
"DeviceIndex":0,
"AssociatePublicIpAddress":true,
"Groups":["'$SECURITY_GROUP_ID'"],
"DeleteOnTermination":true
}],
"BlockDeviceMappings":[{
"DeviceName":"/dev/xvda",
"Ebs":{
"VolumeSize":20,
"VolumeType":"gp3",
"DeleteOnTermination":true
}
}],
"TagSpecifications":[{
"ResourceType":"instance",
"Tags":[{
"Key":"Name",
"Value":"talos-aws-tutorial-worker"
}]
}]}' \
--query 'LaunchTemplate.LaunchTemplateId' \
--output text)
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name talos-aws-tutorial-worker \
--min-size 1 \
--max-size 3 \
--desired-capacity 1 \
--availability-zones $(echo ${AZS[@]}) \
--target-group-arns $TARGET_GROUP_ARN \
--launch-template "LaunchTemplateId=${WORKER_LAUNCH_TEMPLATE_ID}" \
--vpc-zone-identifier $(echo ${SUBNETS[@]} | tr ' ' ',')
Configure the Load Balancer
Now, using the load balancer target group’s ARN, register the control plane instances you created as targets:
for INSTANCE in ${CP_INSTANCES[@]}; do
aws elbv2 register-targets \
--target-group-arn $TARGET_GROUP_ARN \
--targets Id=$(aws ec2 describe-instances \
--instance-ids $INSTANCE \
--query 'Reservations[].Instances[].InstanceId' \
--output text)
done
Export the talosconfig
file
Export the talosconfig
file so commands sent to Talos will be authenticated.
export TALOSCONFIG=$(pwd)/talosconfig
Bootstrap etcd
WORKER_INSTANCES=( $(aws autoscaling \
describe-auto-scaling-instances \
--query 'AutoScalingInstances[?AutoScalingGroupName==`talos-aws-tutorial-worker`].InstanceId' \
--output text) )
Set the endpoints
(the control plane node to which talosctl
commands are sent) and nodes
(the nodes that the command operates on):
talosctl config endpoints $(aws ec2 describe-instances \
--instance-ids ${CP_INSTANCES[*]} \
--query 'Reservations[].Instances[].PublicIpAddress' \
--output text)
talosctl config nodes $(aws ec2 describe-instances \
--instance-ids $(echo ${CP_INSTANCES[1]}) \
--query 'Reservations[].Instances[].PublicIpAddress' \
--output text)
Bootstrap etcd
:
talosctl bootstrap
You can now watch as your cluster bootstraps, by using
talosctl health
This command will take a few minutes while the nodes start etcd, reach quorum, and start the Kubernetes control plane.
You can also watch the performance of a node, via:
talosctl dashboard
Retrieve the kubeconfig
When the cluster is healthy you can retrieve the admin kubeconfig
by running:
talosctl kubeconfig .
export KUBECONFIG=$(pwd)/kubeconfig
And use standard kubectl
commands.
kubectl get nodes
Cleanup resources
If you would like to delete all of the resources you created during this tutorial you can run the following commands.
aws elbv2 delete-listener --listener-arn $LISTENER_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_ARN
aws elbv2 delete-load-balancer --load-balancer-arn $LOAD_BALANCER_ARN
aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name talos-aws-tutorial-worker \
--min-size 0 \
--max-size 0 \
--desired-capacity 0
aws ec2 terminate-instances --instance-ids ${CP_INSTANCES[@]} ${WORKER_INSTANCES[@]} \
--query 'TerminatingInstances[].InstanceId' \
--output text
aws autoscaling delete-auto-scaling-group \
--auto-scaling-group-name talos-aws-tutorial-worker \
--force-delete
aws ec2 delete-launch-template --launch-template-id $WORKER_LAUNCH_TEMPLATE_ID
while $(aws ec2 describe-instances \
--instance-ids ${CP_INSTANCES[@]} ${WORKER_INSTANCES[@]} \
--query 'Reservations[].Instances[].[InstanceId,State.Name]' \
--output text | grep -q shutting-down); do \
echo "waiting for instances to terminate"; sleep 5s
done
aws ec2 detach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID
aws ec2 delete-internet-gateway --internet-gateway-id $IGW_ID
aws ec2 delete-security-group --group-id $SECURITY_GROUP_ID
for SUBNET in ${SUBNETS[@]}; do
aws ec2 delete-subnet --subnet-id $SUBNET
done
aws ec2 delete-vpc --vpc-id $VPC_ID
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig time-server-patch.yaml disk.raw
3.3 - Azure
Creating a Cluster via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume an existing Blob Storage account and some familiarity with Azure. If you need more information on Azure specifics, please see the official Azure documentation.
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.
# Storage account to use
export STORAGE_ACCOUNT="StorageAccountName"
# Storage container to upload to
export STORAGE_CONTAINER="StorageContainerName"
# Resource group name
export GROUP="ResourceGroupName"
# Location
export LOCATION="centralus"
# Get storage account connection string based on info above
export CONNECTION=$(az storage account show-connection-string \
-n $STORAGE_ACCOUNT \
-g $GROUP \
-o tsv)
Choose an Image
There are two methods of deployment in this tutorial.
If you would like to use the official Talos image uploaded to Azure Community Galleries by Sidero Labs, you may skip ahead to setting up your network infrastructure.
Otherwise, if you would like to upload your own image to Azure and use it to deploy Talos, continue to Creating an Image.
Create the Image
First, download the Azure image from Image Factory.
Once downloaded, untar with tar -xvf /path/to/azure-amd64.tar.gz
Upload the VHD
Once you have pulled down the image, you can upload it to blob storage with:
az storage blob upload \
--connection-string $CONNECTION \
--container-name $STORAGE_CONTAINER \
-f /path/to/extracted/talos-azure.vhd \
-n talos-azure.vhd
Register the Image
Now that the image is present in our blob storage, we’ll register it.
az image create \
--name talos \
--source https://$STORAGE_ACCOUNT.blob.core.windows.net/$STORAGE_CONTAINER/talos-azure.vhd \
--os-type linux \
-g $GROUP
Network Infrastructure
Virtual Networks and Security Groups
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a network security group and add rules to it.
# Create vnet
az network vnet create \
--resource-group $GROUP \
--location $LOCATION \
--name talos-vnet \
--subnet-name talos-subnet
# Create network security group
az network nsg create -g $GROUP -n talos-sg
# Client -> apid
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n apid \
--priority 1001 \
--destination-port-ranges 50000 \
--direction inbound
# Trustd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n trustd \
--priority 1002 \
--destination-port-ranges 50001 \
--direction inbound
# etcd
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n etcd \
--priority 1003 \
--destination-port-ranges 2379-2380 \
--direction inbound
# Kubernetes API Server
az network nsg rule create \
-g $GROUP \
--nsg-name talos-sg \
-n kube \
--priority 1004 \
--destination-port-ranges 6443 \
--direction inbound
Load Balancer
We will create a public ip, load balancer, and a health check that we will use for our control plane.
# Create public ip
az network public-ip create \
--resource-group $GROUP \
--name talos-public-ip \
--allocation-method static
# Create lb
az network lb create \
--resource-group $GROUP \
--name talos-lb \
--public-ip-address talos-public-ip \
--frontend-ip-name talos-fe \
--backend-pool-name talos-be-pool
# Create health check
az network lb probe create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-lb-health \
--protocol tcp \
--port 6443
# Create lb rule for 6443
az network lb rule create \
--resource-group $GROUP \
--lb-name talos-lb \
--name talos-6443 \
--protocol tcp \
--frontend-ip-name talos-fe \
--frontend-port 6443 \
--backend-pool-name talos-be-pool \
--backend-port 6443 \
--probe-name talos-lb-health
Network Interfaces
In Azure, we have to pre-create the NICs for our control plane so that they can be associated with our load balancer.
for i in $( seq 0 1 2 ); do
# Create public IP for each nic
az network public-ip create \
--resource-group $GROUP \
--name talos-controlplane-public-ip-$i \
--allocation-method static
# Create nic
az network nic create \
--resource-group $GROUP \
--name talos-controlplane-nic-$i \
--vnet-name talos-vnet \
--subnet talos-subnet \
--network-security-group talos-sg \
--public-ip-address talos-controlplane-public-ip-$i\
--lb-name talos-lb \
--lb-address-pools talos-be-pool
done
# NOTES:
# Talos can detect PublicIPs automatically if PublicIP SKU is Basic.
# Use `--sku Basic` to set SKU to Basic.
Cluster Configuration
With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-public-ip \
--query "ipAddress" \
--output tsv)
talosctl gen config talos-k8s-azure-tutorial https://${LB_PUBLIC_IP}:6443
Compute Creation
We are now ready to create our Azure nodes.
Azure allows you to pass Talos machine configuration to the virtual machine at bootstrap time via
user-data
or custom-data
methods.
Talos supports only the custom-data method; the machine configuration is available to the VM only on the first boot.
Use the steps below depending on whether you have manually uploaded a Talos image or if you are using the Community Gallery image.
Manual Image Upload
# Create availability set
az vm availability-set create \
--name talos-controlplane-av-set \
-g $GROUP
# Create the controlplane nodes
for i in $( seq 0 1 2 ); do
az vm create \
--name talos-controlplane-$i \
--image talos \
--custom-data ./controlplane.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--os-disk-size-gb 20 \
--nics talos-controlplane-nic-$i \
--availability-set talos-controlplane-av-set \
--no-wait
done
# Create worker node
az vm create \
--name talos-worker-0 \
--image talos \
--vnet-name talos-vnet \
--subnet talos-subnet \
--custom-data ./worker.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--nsg talos-sg \
--os-disk-size-gb 20 \
--no-wait
# NOTES:
# `--admin-username` and `--generate-ssh-keys` are required by the az cli,
# but are not actually used by talos
# `--os-disk-size-gb` is the backing disk for Kubernetes and any workload containers
# `--boot-diagnostics-storage` is to enable console output which may be necessary
# for troubleshooting
Azure Community Gallery Image
Talos is updated in Azure’s Community Galleries (Preview) on every release.
To use the Talos image for the current release create the following environment variables.
Edit the variables below if you would like to use a different architecture
or version
.
# The architecture you would like to use. Options are "talos-x64" or "talos-arm64"
ARCHITECTURE="talos-x64"
# This will use the latest version of Talos. The version must be "latest" or in the format Major(int).Minor(int).Patch(int), e.g. 1.5.0
VERSION="latest"
Create the Virtual Machines.
# Create availability set
az vm availability-set create \
--name talos-controlplane-av-set \
-g $GROUP
# Create the controlplane nodes
for i in $( seq 0 1 2 ); do
az vm create \
--name talos-controlplane-$i \
--image /CommunityGalleries/siderolabs-c4d707c0-343e-42de-b597-276e4f7a5b0b/Images/${ARCHITECTURE}/Versions/${VERSION} \
--custom-data ./controlplane.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--os-disk-size-gb 20 \
--nics talos-controlplane-nic-$i \
--availability-set talos-controlplane-av-set \
--no-wait
done
# Create worker node
az vm create \
--name talos-worker-0 \
--image /CommunityGalleries/siderolabs-c4d707c0-343e-42de-b597-276e4f7a5b0b/Images/${ARCHITECTURE}/Versions/${VERSION} \
--vnet-name talos-vnet \
--subnet talos-subnet \
--custom-data ./worker.yaml \
-g $GROUP \
--admin-username talos \
--generate-ssh-keys \
--verbose \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--nsg talos-sg \
--os-disk-size-gb 20 \
--no-wait
# NOTES:
# `--admin-username` and `--generate-ssh-keys` are required by the az cli,
# but are not actually used by talos
# `--os-disk-size-gb` is the backing disk for Kubernetes and any workload containers
# `--boot-diagnostics-storage` is to enable console output which may be necessary
# for troubleshooting
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl
.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(az network public-ip show \
--resource-group $GROUP \
--name talos-controlplane-public-ip-0 \
--query "ipAddress" \
--output tsv)
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig talosconfig config node $CONTROL_PLANE_0_IP
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
3.4 - CloudStack
Creating a Talos Linux Cluster on Apache CloudStack via the CMK CLI
In this guide we will create a single-node Kubernetes cluster in Apache CloudStack.
We assume Apache CloudStack is already running in a basic configuration, and some familiarity with Apache CloudStack.
We will be using the CloudStack Cloudmonkey CLI tool.
Please see the official Apache CloudStack documentation for information related to Apache CloudStack.
Obtain the Talos Image
Download the Talos CloudStack image cloudstack-amd64.raw.gz
from the Image Factory.
Note: the minimum version of Talos required to support Apache CloudStack is v1.8.0.
Using an upload method of your choice, upload the image to Apache CloudStack.
You might be able to use the “Register Template from URL” option to download the image directly from the Image Factory.
Note: CloudStack does not seem to like compressed images, so you might have to download the image to a local webserver, uncompress it, and let CloudStack fetch the image from there instead. Alternatively, you can try removing .gz from the URL to fetch an uncompressed image from the Image Factory.
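A rough sketch of the local-webserver approach, using the same Image Factory schematic referenced elsewhere in this guide (adjust the schematic ID, version, and webserver to your environment):
# download and uncompress the image locally
curl -LO https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/cloudstack-amd64.raw.gz
gunzip cloudstack-amd64.raw.gz
# serve cloudstack-amd64.raw from any webserver reachable by CloudStack,
# then use its URL with "Register Template from URL"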
Get Required Variables
Next we will get a number of required variables and export them for later use:
Get Image Template ID
$ cmk list templates templatefilter=self | jq -r '.template[] | [.id, .name] | @tsv' | sort -k2
01813d29-1253-4080-8d29-d405d94148af Talos 1.8.0
...
$ export IMAGE_ID=01813d29-1253-4080-8d29-d405d94148af
Get Zone ID
Get a list of Zones and select the relevant zone
$ cmk list zones | jq -r '.zone[] | [.id, .name] | @tsv' | sort -k2
a8c71a6f-2e09-41ed-8754-2d4dd8783920 fsn1
9d38497b-d810-42ab-a772-e596994d21d2 fsn2
...
$ export ZONE_ID=a8c71a6f-2e09-41ed-8754-2d4dd8783920
Get Service Offering ID
Get a list of service offerings (instance types) and select the desired offering
$ cmk list serviceofferings | jq -r '.serviceoffering[] | [.id, .memory, .cpunumber, .name] | @tsv' | sort -k4
82ac8c87-22ee-4ec3-8003-c80b09efe02c 2048 2 K8S-CP-S
c7f5253e-e1f1-4e33-a45e-eb2ebbc65fd4 4096 2 K8S-WRK-S
...
$ export SERVICEOFFERING_ID=82ac8c87-22ee-4ec3-8003-c80b09efe02c
Get Network ID
Get a list of networks and select the relevant network for your cluster.
$ cmk list networks zoneid=${ZONE_ID} | jq -r '.network[] | [.id, .type, .name] | @tsv' | sort -k3
f706984f-9dd1-4cb8-9493-3fba1f0de7e3 Isolate demo
143ed8f1-3cc5-4ba2-8717-457ad993cf25 Isolated talos
...
$ export NETWORK_ID=143ed8f1-3cc5-4ba2-8717-457ad993cf25
Get next free Public IP address and ID
To create a loadbalancer for the K8S API Endpoint, find the next available public IP address in the zone.
(In this test environment, the 10.0.0.0/24 RFC-1918 IP range has been configured as “Public IP addresses”)
$ cmk list publicipaddresses zoneid=${ZONE_ID} state=free forvirtualnetwork=true | jq -r '.publicipaddress[] | [.id, .ipaddress] | @tsv' | sort -k2
1901d946-3797-48aa-a113-8fb730b0770a 10.0.0.102
fa207d0e-c8f8-4f09-80f0-d45a6aac77eb 10.0.0.103
aa397291-f5dc-4903-b299-277161b406cb 10.0.0.104
...
$ export PUBLIC_IPADDRESS=10.0.0.102
$ export PUBLIC_IPADDRESS_ID=1901d946-3797-48aa-a113-8fb730b0770a
Acquire and Associate Public IP Address
Acquire and associate the public IP address with the network we selected earlier.
$ cmk associateIpAddress ipaddress=${PUBLIC_IPADDRESS} networkid=${NETWORK_ID}
{
"ipaddress": {
...,
"ipaddress": "10.0.0.102",
...
}
}
Create LB and FW rule using the Public IP Address
Create a Loadbalancer for the K8S API Endpoint.
Note: The “create loadbalancerrule” also takes care of creating a corresponding firewallrule.
$ cmk create loadbalancerrule algorithm=roundrobin name="k8s-api" privateport=6443 publicport=6443 openfirewall=true publicipid=${PUBLIC_IPADDRESS_ID} cidrlist=0.0.0.0/0
{
"loadbalancer": {
...
"name": "k8s-api",
"networkid": "143ed8f1-3cc5-4ba2-8717-457ad993cf25",
"privateport": "6443",
"publicip": "10.0.0.102",
"publicipid": "1901d946-3797-48aa-a113-8fb730b0770a",
"publicport": "6443",
...
}
}
Create the Talos Configuration Files
Finally it’s time to generate the Talos configuration files, using the Public IP address assigned to the loadbalancer.
$ talosctl gen config talos-cloudstack https://${PUBLIC_IPADDRESS}:6443 --with-docs=false --with-examples=false
created controlplane.yaml
created worker.yaml
created talosconfig
Make any adjustments to the controlplane.yaml
and/or worker.yaml
as you like.
Note: Remember to validate!
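For example, as in the Hetzner guide later in this document:
talosctl validate --config controlplane.yaml --mode cloud
talosctl validate --config worker.yaml --mode cloud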
Create Talos VM
Next we will create the actual VM and supply the controlplane.yaml as base64-encoded userdata.
$ cmk deploy virtualmachine zoneid=${ZONE_ID} templateid=${IMAGE_ID} serviceofferingid=${SERVICEOFFERING_ID} networkIds=${NETWORK_ID} name=talosdemo userdata=$(base64 controlplane.yaml | tr -d '\n')
{
"virtualmachine": {
"account": "admin",
"affinitygroup": [],
"cpunumber": 2,
"cpuspeed": 2000,
"cpuused": "0.3%",
...
}
}
Get Talos VM ID and Internal IP address
Get the ID of our newly created VM. (Also available in the full output of the above command.)
$ cmk list virtualmachines | jq -r '.virtualmachine[] | [.id, .ipaddress, .name]|@tsv' | sort -k3
9c119627-cb38-4b64-876b-ca2b79820b5a 10.1.1.154 srv03
545099fc-ec2d-4f32-915d-b0c821cfb634 10.1.1.97 srv04
d37aeca4-7d1f-45cd-9a4d-97fdbf535aa1 10.1.1.243 talosdemo
$ export VM_ID=d37aeca4-7d1f-45cd-9a4d-97fdbf535aa1
$ export VM_IP=10.1.1.243
Get Load Balancer ID
Obtain the ID of the loadbalancerrule
we created earlier.
$ cmk list loadbalancerrules | jq -r '.loadbalancerrule[]| [.id, .publicip, .name] | @tsv' | sort -k2
ede6b711-b6bc-4ade-9e48-4b3f5aa59934 10.0.0.102 k8s-api
1bad3c46-96fa-4f50-a4fc-9a46a54bc350 10.0.0.197 ac0b5d98cf6a24d55a4fb2f9e240c473-tcp-443
$ export LB_RULE_ID=ede6b711-b6bc-4ade-9e48-4b3f5aa59934
Assign Talos VM to Load Balancer
With the ID of the VM and the load balancer, we can assign the VM to the loadbalancerrule
, making the K8S API endpoint available via the load balancer:
cmk assigntoloadbalancerrule id=${LB_RULE_ID} virtualmachineids=${VM_ID}
Bootstrap Etcd
Once the Talos VM has booted, it’s time to bootstrap etcd.
Configure talosctl with the control plane node’s IP address.
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint ${VM_IP}
talosctl --talosconfig talosconfig config node ${VM_IP}
Next, bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
We can also watch the cluster bootstrap via:
talosctl --talosconfig talosconfig dashboard
3.5 - DigitalOcean
Creating a Talos Linux Cluster on Digital Ocean via the CLI
In this guide we will create an HA Kubernetes cluster with 1 worker node, in the NYC region. We assume an existing Space, and some familiarity with DigitalOcean. If you need more information on DigitalOcean specifics, please see the official DigitalOcean documentation.
Create the Image
Download the DigitalOcean image digital-ocean-amd64.raw.gz
from the Image Factory.
Note: the minimum version of Talos required to support Digital Ocean is v1.3.3.
Using an upload method of your choice (doctl
does not have Spaces support), upload the image to a space.
(It’s easy to drag the image file to the space using DigitalOcean’s web console.)
Note: Make sure you upload the file as public.
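One possible upload method is s3cmd, since Spaces is S3-compatible; this is a sketch and assumes s3cmd has already been configured with your Space’s endpoint (for example nyc3.digitaloceanspaces.com) and credentials:
s3cmd put --acl-public digital-ocean-amd64.raw.gz s3://$SPACENAME/digital-ocean-amd64.raw.gz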
Now, create an image using the URL of the uploaded image:
export REGION=nyc3
doctl compute image create \
--region $REGION \
--image-description talos-digital-ocean-tutorial \
--image-url https://$SPACENAME.$REGION.digitaloceanspaces.com/digital-ocean-amd64.raw.gz \
Talos
Save the image ID. We will need it when creating droplets.
Create a Load Balancer
doctl compute load-balancer create \
--region $REGION \
--name talos-digital-ocean-tutorial-lb \
--tag-name talos-digital-ocean-tutorial-control-plane \
--health-check protocol:tcp,port:6443,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:5,unhealthy_threshold:3 \
--forwarding-rules entry_protocol:tcp,entry_port:443,target_protocol:tcp,target_port:6443
Note the returned ID of the load balancer.
We will need the IP of the load balancer. Using the ID of the load balancer, run:
doctl compute load-balancer get --format IP <load balancer ID>
Note that it may take a few minutes before the load balancer is provisioned, so repeat this command until it returns with the IP address.
Create the Machine Configuration Files
Using the IP address (or DNS name, if you have created one) of the loadbalancer, generate the base configuration files for the Talos machines. Also note that the load balancer forwards port 443 to port 6443 on the associated nodes, so we should use 443 as the port in the config definition:
$ talosctl gen config talos-k8s-digital-ocean-tutorial https://<load balancer IP or DNS>:443
created controlplane.yaml
created worker.yaml
created talosconfig
Create the Droplets
Create a dummy SSH key
Although SSH is not used by Talos, DigitalOcean requires that an SSH key be associated with a droplet during creation. We will create a dummy key that can be used to satisfy this requirement.
doctl compute ssh-key create --public-key "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbl0I1s/yOETIKjFr7mDLp8LmJn6OIZ68ILjVCkoN6lzKmvZEqEm1YYeWoI0xgb80hQ1fKkl0usW6MkSqwrijoUENhGFd6L16WFL53va4aeJjj2pxrjOr3uBFm/4ATvIfFTNVs+VUzFZ0eGzTgu1yXydX8lZMWnT4JpsMraHD3/qPP+pgyNuI51LjOCG0gVCzjl8NoGaQuKnl8KqbSCARIpETg1mMw+tuYgaKcbqYCMbxggaEKA0ixJ2MpFC/kwm3PcksTGqVBzp3+iE5AlRe1tnbr6GhgT839KLhOB03j7lFl1K9j1bMTOEj5Io8z7xo/XeF2ZQKHFWygAJiAhmKJ dummy@dummy.local" dummy
Note the ssh key ID that is returned - we will use it in creating the droplets.
Create the Control Plane Nodes
Run the following commands to create three control plane nodes:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-1
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-2
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--tag-names talos-digital-ocean-tutorial-control-plane \
--user-data-file controlplane.yaml \
--ssh-keys <ssh key ID> \
talos-control-plane-3
Note the droplet ID returned for the first control plane node.
Create the Worker Nodes
Run the following to create a worker node:
doctl compute droplet create \
--region $REGION \
--image <image ID> \
--size s-2vcpu-4gb \
--enable-private-networking \
--user-data-file worker.yaml \
--ssh-keys <ssh key ID> \
talos-worker-1
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP:
doctl compute droplet get --format PublicIPv4 <droplet ID>
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
We can also watch the cluster bootstrap via:
talosctl --talosconfig talosconfig health
3.6 - Exoscale
Talos is known to work on exoscale.com; however, it is currently undocumented.
3.7 - GCP
Creating a Cluster via the CLI
In this guide, we will create an HA Kubernetes cluster in GCP with 1 worker node. We will assume an existing Cloud Storage bucket, and some familiarity with Google Cloud. If you need more information on Google Cloud specifics, please see the official Google documentation.
jq and talosctl also need to be installed.
Manual Setup
Environment Setup
We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.
# Storage account to use
export STORAGE_BUCKET="StorageBucketName"
# Region
export REGION="us-central1"
Create the Image
First, download the Google Cloud image from Image Factory.
These images are called gcp-$ARCH.tar.gz
.
Upload the Image
Once you have downloaded the image, you can upload it to your storage bucket with:
gsutil cp /path/to/gcp-amd64.tar.gz gs://$STORAGE_BUCKET
Register the image
Now that the image is present in our bucket, we’ll register it.
gcloud compute images create talos \
--source-uri=gs://$STORAGE_BUCKET/gcp-amd64.tar.gz \
--guest-os-features=VIRTIO_SCSI_MULTIQUEUE
Network Infrastructure
Load Balancers and Firewalls
Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a firewall, load balancer, and their required components.
130.211.0.0/22
and 35.191.0.0/16
are the GCP Load Balancer IP ranges
# Create Instance Group
gcloud compute instance-groups unmanaged create talos-ig \
--zone $REGION-b
# Create port for IG
gcloud compute instance-groups set-named-ports talos-ig \
--named-ports tcp6443:6443 \
--zone $REGION-b
# Create health check
gcloud compute health-checks create tcp talos-health-check --port 6443
# Create backend
gcloud compute backend-services create talos-be \
--global \
--protocol TCP \
--health-checks talos-health-check \
--timeout 5m \
--port-name tcp6443
# Add instance group to backend
gcloud compute backend-services add-backend talos-be \
--global \
--instance-group talos-ig \
--instance-group-zone $REGION-b
# Create tcp proxy
gcloud compute target-tcp-proxies create talos-tcp-proxy \
--backend-service talos-be \
--proxy-header NONE
# Create LB IP
gcloud compute addresses create talos-lb-ip --global
# Forward 443 from LB IP to tcp proxy
gcloud compute forwarding-rules create talos-fwd-rule \
--global \
--ports 443 \
--address talos-lb-ip \
--target-tcp-proxy talos-tcp-proxy
# Create firewall rule for health checks
gcloud compute firewall-rules create talos-controlplane-firewall \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags talos-controlplane \
--allow tcp:6443
# Create firewall rule to allow talosctl access
gcloud compute firewall-rules create talos-controlplane-talosctl \
--source-ranges 0.0.0.0/0 \
--target-tags talos-controlplane \
--allow tcp:50000
Cluster Configuration
With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.
LB_PUBLIC_IP=$(gcloud compute forwarding-rules describe talos-fwd-rule \
--global \
--format json \
| jq -r .IPAddress)
talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443
Additionally, you can specify --config-patch with an RFC 6902 JSON patch, which will be applied during the config generation.
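For example, an illustrative patch that enables workload scheduling on the control plane nodes (adapt the path and value to your needs):
talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443 \
  --config-patch '[{"op": "add", "path": "/cluster/allowSchedulingOnControlPlanes", "value": true}]'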
Compute Creation
We are now ready to create our GCP nodes.
# Create the control plane nodes.
for i in $( seq 0 2 ); do
gcloud compute instances create talos-controlplane-$i \
--image talos \
--zone $REGION-b \
--tags talos-controlplane,talos-controlplane-$i \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./controlplane.yaml
done
# Add control plane nodes to instance group
for i in $( seq 0 2 ); do
gcloud compute instance-groups unmanaged add-instances talos-ig \
--zone $REGION-b \
--instances talos-controlplane-$i
done
# Create worker
gcloud compute instances create talos-worker-0 \
--image talos \
--zone $REGION-b \
--boot-disk-size 20GB \
--metadata-from-file=user-data=./worker.yaml \
--tags talos-worker-0
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl
.
First, we need to discover the public IP of our first control plane node.
CONTROL_PLANE_0_IP=$(gcloud compute instances describe talos-controlplane-0 \
--zone $REGION-b \
--format json \
| jq -r '.networkInterfaces[0].accessConfigs[0].natIP')
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig talosconfig config node $CONTROL_PLANE_0_IP
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
Cleanup
# cleanup VM's
gcloud compute instances delete \
talos-worker-0 \
talos-controlplane-0 \
talos-controlplane-1 \
talos-controlplane-2
# cleanup firewall rules
gcloud compute firewall-rules delete \
talos-controlplane-talosctl \
talos-controlplane-firewall
# cleanup forwarding rules
gcloud compute forwarding-rules delete \
talos-fwd-rule
# cleanup addresses
gcloud compute addresses delete \
talos-lb-ip
# cleanup proxies
gcloud compute target-tcp-proxies delete \
talos-tcp-proxy
# cleanup backend services
gcloud compute backend-services delete \
talos-be
# cleanup health checks
gcloud compute health-checks delete \
talos-health-check
# cleanup unmanaged instance groups
gcloud compute instance-groups unmanaged delete \
talos-ig
# cleanup Talos image
gcloud compute images delete \
talos
Using GCP Deployment manager
Using GCP deployment manager automatically creates a Google Storage bucket and uploads the Talos image to it.
Once the deployment is complete the generated talosconfig
and kubeconfig
files are uploaded to the bucket.
By default, this setup creates a three-node control plane and a single worker in us-west1-b.
First we need to create a folder to store our deployment manifests and perform all subsequent operations from that folder.
mkdir -p talos-gcp-deployment
cd talos-gcp-deployment
Getting the deployment manifests
We need to download two deployment manifests (plus an optional CCM manifest) from the Talos GitHub repository.
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.10/talos-guides/install/cloud-platforms/gcp/config.yaml"
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.10/talos-guides/install/cloud-platforms/gcp/talos-ha.jinja"
# if using ccm
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/v1.10/talos-guides/install/cloud-platforms/gcp/gcp-ccm.yaml"
Updating the config
Now we need to update the local config.yaml
file with any required changes such as changing the default zone, Talos version, machine sizes, nodes count etc.
An example config.yaml
file is shown below:
imports:
- path: talos-ha.jinja
resources:
- name: talos-ha
type: talos-ha.jinja
properties:
zone: us-west1-b
talosVersion: v1.10.0-alpha.0
externalCloudProvider: false
controlPlaneNodeCount: 5
controlPlaneNodeType: n1-standard-1
workerNodeCount: 3
workerNodeType: n1-standard-1
outputs:
- name: bucketName
value: $(ref.talos-ha.bucketName)
Enabling external cloud provider
Note: The externalCloudProvider
property is set to false
by default.
The manifest used for deploying the ccm (cloud controller manager) currently uses the GCP ccm provided by OpenShift, since there are no public images for the ccm yet.
Since the routes controller is disabled while deploying the CCM, the CNI pods need to be restarted after the CCM deployment is complete to remove the node.kubernetes.io/network-unavailable taint. See Nodes network-unavailable taint not removed after installing ccm for more information.
Use a custom built image for the ccm deployment if required.
Creating the deployment
Now we are ready to create the deployment.
Confirm with y
for any prompts.
Run the following command to create the deployment:
# use a unique name for the deployment, resources are prefixed with the deployment name
export DEPLOYMENT_NAME="<deployment name>"
gcloud deployment-manager deployments create "${DEPLOYMENT_NAME}" --config config.yaml
Retrieving the outputs
First we need to get the deployment outputs.
# first get the outputs
OUTPUTS=$(gcloud deployment-manager deployments describe "${DEPLOYMENT_NAME}" --format json | jq '.outputs[]')
BUCKET_NAME=$(jq -r '. | select(.name == "bucketName").finalValue' <<< "${OUTPUTS}")
# used when cloud controller is enabled
SERVICE_ACCOUNT=$(jq -r '. | select(.name == "serviceAccount").finalValue' <<< "${OUTPUTS}")
PROJECT=$(jq -r '. | select(.name == "project").finalValue' <<< "${OUTPUTS}")
Note: If the cloud controller manager is enabled, the commands below need to be run to allow the controller’s custom role to access cloud resources.
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member "serviceAccount:${SERVICE_ACCOUNT}" \
--role roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.admin
gcloud projects add-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.loadBalancerAdmin
Downloading talos and kube config
In addition to the talosconfig
and kubeconfig
files, the storage bucket contains the controlplane.yaml
and worker.yaml
files used to join additional nodes to the cluster.
gsutil cp "gs://${BUCKET_NAME}/generated/talosconfig" .
gsutil cp "gs://${BUCKET_NAME}/generated/kubeconfig" .
Deploying the cloud controller manager
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
apply \
--filename gcp-ccm.yaml
# wait for the ccm to be up
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout status \
daemonset cloud-controller-manager
If the cloud controller manager is enabled, we need to restart the CNI pods to remove the node.kubernetes.io/network-unavailable
taint.
# restart the CNI pods, in this case flannel
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout restart \
daemonset kube-flannel
# wait for the pods to be restarted
kubectl \
--kubeconfig kubeconfig \
--namespace kube-system \
rollout status \
daemonset kube-flannel
Check cluster status
kubectl \
--kubeconfig kubeconfig \
get nodes
Cleanup deployment
Warning: This will delete the deployment and all resources associated with it.
Run the commands below if the cloud controller manager is enabled:
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member "serviceAccount:${SERVICE_ACCOUNT}" \
--role roles/iam.serviceAccountUser
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.admin
gcloud projects remove-iam-policy-binding \
"${PROJECT}" \
--member serviceAccount:"${SERVICE_ACCOUNT}" \
--role roles/compute.loadBalancerAdmin
Now we can finally remove the deployment
# delete the objects in the bucket first
gsutil -m rm -r "gs://${BUCKET_NAME}"
gcloud deployment-manager deployments delete "${DEPLOYMENT_NAME}" --quiet
3.8 - Hetzner
Upload image
Hetzner Cloud does not support uploading custom images. You can email their support to get a Talos ISO uploaded (see issue 3599), or you can prepare an image snapshot yourself.
There are two options to upload your own.
- Run an instance in rescue mode and replace the system OS with the Talos image
- Use Hashicorp packer to prepare an image
Rescue mode
Create a new Server in the Hetzner console.
Enable the Hetzner Rescue System for this server and reboot.
Upon a reboot, the server will boot a special minimal Linux distribution designed for repair and reinstall.
Once running, login to the server using ssh
to prepare the system disk by doing the following:
# Check that you are in Rescue mode
df
### Result is like:
# udev 987432 0 987432 0% /dev
# 213.133.99.101:/nfs 308577696 247015616 45817536 85% /root/.oldroot/nfs
# overlay 995672 8340 987332 1% /
# tmpfs 995672 0 995672 0% /dev/shm
# tmpfs 398272 572 397700 1% /run
# tmpfs 5120 0 5120 0% /run/lock
# tmpfs 199132 0 199132 0% /run/user/0
# Download the Talos image
cd /tmp
wget -O /tmp/talos.raw.xz https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/hcloud-amd64.raw.xz
# Replace system
xz -d -c /tmp/talos.raw.xz | dd of=/dev/sda && sync
# shutdown the instance
shutdown -h now
To make sure disk content is consistent, it is recommended to shut the server down before taking an image (snapshot). Once shutdown, simply create an image (snapshot) from the console. You can now use this snapshot to run Talos on the cloud.
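If you prefer the hcloud CLI over the web console, something like the following should also work; this is a sketch where <server-name> is the server you just prepared and the hcloud CLI is assumed to be configured:
hcloud server poweroff <server-name>
hcloud server create-image --type snapshot --description "talos system disk" <server-name>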
Packer
Install packer to the local machine.
Create a config file for packer to use:
# hcloud.pkr.hcl
packer {
required_plugins {
hcloud = {
source = "github.com/hetznercloud/hcloud"
version = "~> 1"
}
}
}
variable "talos_version" {
type = string
default = "v1.10.0-alpha.0"
}
variable "arch" {
type = string
default = "amd64"
}
variable "server_type" {
type = string
default = "cx22"
}
variable "server_location" {
type = string
default = "hel1"
}
locals {
image = "https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/${var.talos_version}/hcloud-${var.arch}.raw.xz"
}
source "hcloud" "talos" {
rescue = "linux64"
image = "debian-11"
location = "${var.server_location}"
server_type = "${var.server_type}"
ssh_username = "root"
snapshot_name = "talos system disk - ${var.arch} - ${var.talos_version}"
snapshot_labels = {
type = "infra",
os = "talos",
version = "${var.talos_version}",
arch = "${var.arch}",
}
}
build {
sources = ["source.hcloud.talos"]
provisioner "shell" {
inline = [
"apt-get install -y wget",
"wget -O /tmp/talos.raw.xz ${local.image}",
"xz -d -c /tmp/talos.raw.xz | dd of=/dev/sda && sync",
]
}
}
Additionally, you could create a variables file containing
arch = "arm64"
server_type = "cax11"
server_location = "fsn1"
and build the snapshot for arm64.
Create a new image by issuing the commands shown below. Note that to create a new API token for your Project, switch to the Hetzner Cloud Console, choose a Project, go to Access → Security, and create a new token.
# First you need set API Token
export HCLOUD_TOKEN=${TOKEN}
# Upload image
packer init .
packer build .
# Save the image ID
export IMAGE_ID=<image-id-in-packer-output>
After doing this, you can find the snapshot in the console interface.
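To build the arm64 snapshot mentioned above, you can point the same build at that extra variables file; a sketch, assuming you saved it as arm64.pkrvars.hcl (a name ending in .auto.pkrvars.hcl would be picked up automatically instead):
packer build -var-file=arm64.pkrvars.hcl .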
Creating a Cluster via the CLI
This section assumes you have the hcloud CLI utility installed on your local machine.
# Set hcloud context and api key
hcloud context create talos-tutorial
Create a Load Balancer
Create a load balancer by issuing the commands shown below. Save the IP/DNS name, as this info will be used in the next step.
hcloud load-balancer create --name controlplane --network-zone eu-central --type lb11 --label 'type=controlplane'
### Result is like:
# LoadBalancer 484487 created
# IPv4: 49.12.X.X
# IPv6: 2a01:4f8:X:X::1
hcloud load-balancer add-service controlplane \
--listen-port 6443 --destination-port 6443 --protocol tcp
hcloud load-balancer add-target controlplane \
--label-selector 'type=controlplane'
Create the Machine Configuration Files
Generating Base Configurations
Using the IP/DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines by issuing:
$ talosctl gen config talos-k8s-hcloud-tutorial https://<load balancer IP or DNS>:6443 \
--with-examples=false --with-docs=false
created controlplane.yaml
created worker.yaml
created talosconfig
Generating the config without examples and docs is necessary because otherwise you can easily exceed the 32 KB limit on uploadable user data (see issue 8805).
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatches which will be applied during the config generation.
Validate the Configuration Files
Validate any edited machine configs with:
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
We can now create our servers.
Note that you can find IMAGE_ID
in the snapshot section of the console: https://console.hetzner.cloud/projects/$PROJECT_ID/servers/snapshots
.
Create the Control Plane Nodes
Create the control plane nodes with:
export IMAGE_ID=<your-image-id>
hcloud server create --name talos-control-plane-1 \
--image ${IMAGE_ID} \
--type cx22 --location hel1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
hcloud server create --name talos-control-plane-2 \
--image ${IMAGE_ID} \
--type cx22 --location fsn1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
hcloud server create --name talos-control-plane-3 \
--image ${IMAGE_ID} \
--type cx22 --location nbg1 \
--label 'type=controlplane' \
--user-data-from-file controlplane.yaml
Create the Worker Nodes
Create the worker nodes with the following command, repeating (and incrementing the name counter) as many times as desired.
hcloud server create --name talos-worker-1 \
--image ${IMAGE_ID} \
--type cx22 --location hel1 \
--label 'type=worker' \
--user-data-from-file worker.yaml
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP.
This can be found by issuing:
hcloud server list | grep talos-control-plane
Set the endpoints
and nodes
for your talosconfig with:
talosctl --talosconfig talosconfig config endpoint <control-plane-1-IP>
talosctl --talosconfig talosconfig config node <control-plane-1-IP>
Bootstrap etcd
on the first control plane node with:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
Install Hetzner’s Cloud Controller Manager
First of all, we need to patch the Talos machine configuration used by each node:
# patch.yaml
cluster:
externalCloudProvider:
enabled: true
Then run the following command:
talosctl --talosconfig talosconfig patch machineconfig --patch-file patch.yaml --nodes <comma separated list of all your nodes' IP addresses>
With that in place, we can now follow the official instructions, ignoring the kubeadm
related steps.
3.9 - Kubernetes
Talos Linux can be run as a pod in Kubernetes similar to running Talos in Docker. This can be used e.g. to run controlplane nodes inside an existing Kubernetes cluster.
Talos Linux running in Kubernetes is not the full Talos Linux experience, as it runs in a container using the host’s kernel and network stack. Some operations, such as upgrades and reboots, are not supported.
Prerequisites
- a running Kubernetes cluster
- a
talos
container image:ghcr.io/siderolabs/talos:v1.10.0-alpha.0
Machine Configuration
Machine configuration can be generated using the Getting Started guide. The machine install disk will be ignored, as will the install image; the Talos version is determined by the container image being used.
The required machine configuration patch to enable using container runtime DNS:
machine:
features:
hostDNS:
enabled: true
forwardKubeDNSToHost: true
Talos and Kubernetes API can be exposed using Kubernetes services or load balancers, so they can be accessed from outside the cluster.
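For example, a minimal Service of type LoadBalancer could front both APIs; this is only a sketch, and it assumes the Talos pods carry an app: talos-controlplane label, which this guide does not define:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: talos-controlplane
spec:
  type: LoadBalancer
  selector:
    app: talos-controlplane # hypothetical label; match the labels on your Talos pods
  ports:
    - name: talos-api
      port: 50000
      targetPort: 50000
      protocol: TCP
    - name: k8s-api
      port: 6443
      targetPort: 6443
      protocol: TCP
EOF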
Running Talos Pods
There might be many ways to run Talos in Kubernetes (StatefulSet, Deployment, single Pod), so we will only provide some basic guidance here.
Container Settings
env:
- name: PLATFORM
value: container
image: ghcr.io/siderolabs/talos:v1.10.0-alpha.0
ports:
- containerPort: 50000
name: talos-api
protocol: TCP
- containerPort: 6443
name: k8s-api
protocol: TCP
securityContext:
privileged: true
readOnlyRootFilesystem: true
seccompProfile:
type: Unconfined
Submitting Initial Machine Configuration
Initial machine configuration can be submitted using talosctl apply-config --insecure
when the pod is running, or it can be submitted
via an environment variable USERDATA
with base64-encoded machine configuration.
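For example (a sketch; <pod-or-service-IP> stands for whatever address exposes port 50000 of the Talos pod):
talosctl apply-config --insecure \
  --nodes <pod-or-service-IP> \
  --file controlplane.yaml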
Volume Mounts
Three ephemeral mounts are required for /run
, /system
, and /tmp
directories:
volumeMounts:
- mountPath: /run
name: run
- mountPath: /system
name: system
- mountPath: /tmp
name: tmp
volumes:
- emptyDir: {}
name: run
- emptyDir: {}
name: system
- emptyDir: {}
name: tmp
Several other mountpoints are required, and they should persist across pod restarts, so one should use PersistentVolume
for them:
volumeMounts:
- mountPath: /system/state
name: system-state
- mountPath: /var
name: var
- mountPath: /etc/cni
name: etc-cni
- mountPath: /etc/kubernetes
name: etc-kubernetes
- mountPath: /usr/libexec/kubernetes
name: usr-libexec-kubernetes
3.10 - Nocloud
Talos supports the nocloud data source implementation, following the nocloud specification.
On bare-metal, Talos Linux was tested to correctly parse nocloud
configuration from the following providers:
There are two ways to configure a Talos server with the nocloud platform:
- via SMBIOS “serial number” option
- using CDROM or USB-flash filesystem
Note: This requires the nocloud image which can be downloaded from the Image Factory.
SMBIOS Serial Number
This method requires the network connection to be up (e.g. via DHCP). Configuration is delivered from the HTTP server.
ds=nocloud-net;s=http://10.10.0.1/configs/;h=HOSTNAME
After the network initialization is complete, Talos fetches:
- the machine config from
http://10.10.0.1/configs/user-data
- the network config (if available) from
http://10.10.0.1/configs/network-config
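For testing, any static file server that lays the files out under that path will do; a rough sketch using Python’s built-in server, run on the machine holding 10.10.0.1 (port 80 usually requires elevated privileges):
mkdir -p configs
cp controlplane.yaml configs/user-data
cp network-config configs/network-config # optional
sudo python3 -m http.server 80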
SMBIOS: QEMU
Add the following flag to qemu
command line when starting a VM:
qemu-system-x86_64 \
...\
-smbios type=1,serial=ds=nocloud-net;s=http://10.10.0.1/configs/
SMBIOS: Proxmox
Set the source machine config through the serial number in the Proxmox GUI.
You can read the VM config from a root shell with the command qm config $ID (where $ID is the VM ID number of the virtual machine); you will see something like:
# qm config $ID
...
smbios1: uuid=5b0f7dcf-cfe3-4bf3-87a2-1cad29bd51f9,serial=ZHM9bm9jbG91ZC1uZXQ7cz1odHRwOi8vMTAuMTAuMC4xL2NvbmZpZ3Mv,base64=1
...
Where serial holds the base64-encoded string version of ds=nocloud-net;s=http://10.10.0.1/configs/
.
The serial can also be set from a root
shell on the Proxmox server:
# qm set $VM --smbios1 "uuid=5b0f7dcf-cfe3-4bf3-87a2-1cad29bd51f9,serial=$(printf '%s' 'ds=nocloud-net;s=http://10.10.0.1/configs/' | base64),base64=1"
update VM 105: -smbios1 uuid=5b0f7dcf-cfe3-4bf3-87a2-1cad29bd51f9,serial=ZHM9bm9jbG91ZC1uZXQ7cz1odHRwOi8vMTAuMTAuMC4xL2NvbmZpZ3Mv,base64=1
Keep in mind that if you set the serial from the command line, you must encode it as base64, and you must include the UUID and any other settings that are already set for the smbios1
option or they will be removed.
CDROM/USB
Talos can also get machine config from local attached storage without any prior network connection being established.
You can provide configs to the server via files on a VFAT or ISO9660 filesystem.
The filesystem volume label must be cidata or CIDATA.
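For a USB flash drive, a minimal sketch using a VFAT filesystem could look like this (the device name /dev/sdb1 is an assumption; double-check it before formatting):
sudo mkfs.vfat -n CIDATA /dev/sdb1
sudo mount /dev/sdb1 /mnt
sudo cp controlplane.yaml /mnt/user-data
echo "local-hostname: controlplane-1" | sudo tee /mnt/meta-data
sudo umount /mnt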
Example: QEMU
Create and prepare Talos machine config:
export CONTROL_PLANE_IP=192.168.1.10
talosctl gen config talos-nocloud https://$CONTROL_PLANE_IP:6443 --output-dir _out
Prepare cloud-init configs:
mkdir -p iso
mv _out/controlplane.yaml iso/user-data
echo "local-hostname: controlplane-1" > iso/meta-data
cat > iso/network-config << EOF
version: 1
config:
- type: physical
name: eth0
mac_address: "52:54:00:12:34:00"
subnets:
- type: static
address: 192.168.1.10
netmask: 255.255.255.0
gateway: 192.168.1.254
EOF
Create cloud-init iso image
cd iso && genisoimage -output cidata.iso -V cidata -r -J user-data meta-data network-config
Start the VM
qemu-system-x86_64 \
...
-cdrom iso/cidata.iso \
...
Example: Proxmox
Proxmox can create a cloud-init disk for you. Edit the cloud-init config information in Proxmox as follows, substituting your own information as necessary:
and then add a cicustom
param to the virtual machine’s configuration from a root
shell:
# qm set 100 --cicustom user=local:snippets/controlplane-1.yml
update VM 100: -cicustom user=local:snippets/controlplane-1.yml
Note: snippets/controlplane-1.yml is the Talos machine config. It is usually located at /var/lib/vz/snippets/controlplane-1.yml. This file must be placed at this path manually, as Proxmox does not support snippet uploading via the API/GUI.
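For example, one way to place the file (the host name is a placeholder; any method of copying the file to that path on the Proxmox server works):
scp controlplane.yaml root@<proxmox-host>:/var/lib/vz/snippets/controlplane-1.yml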
Click the Regenerate Image button after the above changes are made.
3.11 - OpenStack
Creating a Cluster via the CLI
In this guide, we will create an HA Kubernetes cluster in OpenStack with 1 worker node. We assume some existing familiarity with OpenStack. If you need more information on OpenStack specifics, please see the official OpenStack documentation.
Environment Setup
You should have an existing openrc file. This file will provide environment variables necessary to talk to your OpenStack cloud. See here for instructions on fetching this file.
Create the Image
First, download the OpenStack image from Image Factory.
These images are called openstack-$ARCH.tar.gz
.
Untar this file with tar -xvf openstack-$ARCH.tar.gz
.
The resulting file will be called disk.raw
.
Upload the Image
Once you have the image, you can upload to OpenStack with:
openstack image create --public --disk-format raw --file disk.raw talos
Network Infrastructure
Load Balancer and Network Ports
Once the image is prepared, you will need to work through setting up the network. Issue the following to create a load balancer, the necessary network ports for each control plane node, and associations between the two.
Creating loadbalancer:
# Create load balancer, updating vip-subnet-id if necessary
openstack loadbalancer create --name talos-control-plane --vip-subnet-id public
# Create listener
openstack loadbalancer listener create --name talos-control-plane-listener --protocol TCP --protocol-port 6443 talos-control-plane
# Pool and health monitoring
openstack loadbalancer pool create --name talos-control-plane-pool --lb-algorithm ROUND_ROBIN --listener talos-control-plane-listener --protocol TCP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP talos-control-plane-pool
Creating ports:
# Create ports for control plane nodes, updating network name if necessary
openstack port create --network shared talos-control-plane-1
openstack port create --network shared talos-control-plane-2
openstack port create --network shared talos-control-plane-3
# Create floating IPs for the ports, so that you will have talosctl connectivity to each control plane
openstack floating ip create --port talos-control-plane-1 public
openstack floating ip create --port talos-control-plane-2 public
openstack floating ip create --port talos-control-plane-3 public
Note: Take note of the private and public IPs associated with each of these ports, as they will be used in the next step. Additionally, take note of the port ID, as it will be used in server creation.
Associate port’s private IPs to loadbalancer:
# Create members for each port IP, updating subnet-id and address as necessary.
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-1 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-2 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-3 PORT> --protocol-port 6443 talos-control-plane-pool
Security Groups
This example uses the default security group in OpenStack. Ports have been opened to ensure that connectivity from both inside and outside the group is possible. You will want to allow, at a minimum, ports 6443 (Kubernetes API server) and 50000 (Talos API) from external sources. It is also recommended to allow communication over all ports from within the subnet.
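As a sketch, opening the minimum ports on the default security group might look like the following (the group name and remote ranges are assumptions; tighten them to your environment):
openstack security group rule create --proto tcp --dst-port 6443 --remote-ip 0.0.0.0/0 default
openstack security group rule create --proto tcp --dst-port 50000 --remote-ip 0.0.0.0/0 default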
Cluster Configuration
With our networking set up, we'll fetch the IP of our load balancer and create our configuration files:
LB_PUBLIC_IP=$(openstack loadbalancer show talos-control-plane -f json | jq -r .vip_address)
talosctl gen config talos-k8s-openstack-tutorial https://${LB_PUBLIC_IP}:6443
Additionally, you can specify --config-patch with an RFC 6902 JSON patch, which will be applied during config generation.
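For example, a sketch that sets a hostname at generation time (the patch shown is purely illustrative):
talosctl gen config talos-k8s-openstack-tutorial https://${LB_PUBLIC_IP}:6443 \
  --config-patch '[{"op": "add", "path": "/machine/network/hostname", "value": "talos-control-plane-1"}]'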
Compute Creation
We are now ready to create our OpenStack nodes.
Create control plane:
# Create the three control plane nodes, substituting values as necessary.
for i in $( seq 1 3 ); do
openstack server create talos-control-plane-$i --flavor m1.small --nic port-id=talos-control-plane-$i --image talos --user-data /path/to/controlplane.yaml
done
Create worker:
# Update network name as necessary.
openstack server create talos-worker-1 --flavor m1.small --network shared --image talos --user-data /path/to/worker.yaml
Note: This step can be repeated to add more workers.
Bootstrap Etcd
You should now be able to interact with your cluster with talosctl
.
We will use one of the floating IPs we allocated earlier.
It does not matter which one.
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
3.12 - Oracle
Upload image
Oracle Cloud does not currently provide an official Talos image, so you can use the Bring Your Own Image (BYOI) approach.
Prepare an image for upload:
Generate an image using Image Factory.
Download the disk image artifact (e.g: https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/oracle-arm64.raw.xz)
Define the image metadata file called image_metadata.json. Example for an arm64 deployment:
{
  "version": 2,
  "externalLaunchOptions": {
    "firmware": "UEFI_64",
    "networkType": "PARAVIRTUALIZED",
    "bootVolumeType": "PARAVIRTUALIZED",
    "remoteDataVolumeType": "PARAVIRTUALIZED",
    "localDataVolumeType": "PARAVIRTUALIZED",
    "launchOptionsSource": "PARAVIRTUALIZED",
    "pvAttachmentVersion": 2,
    "pvEncryptionInTransitEnabled": true,
    "consistentVolumeNamingEnabled": true
  },
  "imageCapabilityData": null,
  "imageCapsFormatVersion": null,
  "operatingSystem": "Talos",
  "operatingSystemVersion": "1.7.6",
  "additionalMetadata": {
    "shapeCompatibilities": [
      {
        "internalShapeName": "VM.Standard.A1.Flex",
        "ocpuConstraints": null,
        "memoryConstraints": null
      }
    ]
  }
}
Extract the xz or zst archive:
xz --decompress ./oracle-arm64.raw.xz # or zstd --decompress ./oracle-arm64.raw.zst
Convert the image to qcow2 format (using qemu):
qemu-img convert -f raw -O qcow2 oracle-arm64.raw oracle-arm64.qcow2
Create an archive containing the image and metadata called talos-oracle-arm64.oci:
tar zcf oracle-arm64.oci oracle-arm64.qcow2 image_metadata.json
Upload the image to a storage bucket.
Create an image, using the new URL format for the storage bucket object (see the sketch below).
Note: file names depend on the deployment configuration, such as the architecture; adjust accordingly.
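A hedged sketch of the import step using the OCI CLI (the region, namespace and bucket placeholders are assumptions; a pre-authenticated request URL works as well):
oci compute image import from-object-uri \
  --compartment-id $compartment_id \
  --display-name talos-arm64 \
  --uri "https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/talos-oracle-arm64.oci"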
Talos config
Oracle Cloud has a highly available NTP service; it can be enabled in the Talos machine config with:
machine:
time:
servers:
- 169.254.169.254
Creating a Cluster via the CLI
Log in to the console and open the Cloud Shell.
Create a network
export cidr_block=10.0.0.0/16
export subnet_block=10.0.0.0/24
export compartment_id=<substitute-value-of-compartment_id> # https://docs.cloud.oracle.com/en-us/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/network/vcn/create.html#cmdoption-compartment-id
export vcn_id=$(oci network vcn create --cidr-block $cidr_block --display-name talos-example --compartment-id $compartment_id --query data.id --raw-output)
export rt_id=$(oci network subnet create --cidr-block $subnet_block --display-name kubernetes --compartment-id $compartment_id --vcn-id $vcn_id --query data.route-table-id --raw-output)
export ig_id=$(oci network internet-gateway create --compartment-id $compartment_id --is-enabled true --vcn-id $vcn_id --query data.id --raw-output)
oci network route-table update --rt-id $rt_id --route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$ig_id\"}]" --force
# disable firewall
export sl_id=$(oci network vcn list --compartment-id $compartment_id --query 'data[0]."default-security-list-id"' --raw-output)
oci network security-list update --security-list-id $sl_id --egress-security-rules '[{"destination": "0.0.0.0/0", "protocol": "all", "isStateless": false}]' --ingress-security-rules '[{"source": "0.0.0.0/0", "protocol": "all", "isStateless": false}]' --force
Create a Load Balancer
Create a load balancer by issuing the commands shown below. Save the IP/DNS name, as this info will be used in the next step.
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export network_load_balancer_id=$(oci nlb network-load-balancer create --compartment-id $compartment_id --display-name controlplane-lb --subnet-id $subnet_id --is-preserve-source-destination false --is-private false --query data.id --raw-output)
cat <<EOF > talos-health-checker.json
{
"intervalInMillis": 10000,
"port": 50000,
"protocol": "TCP"
}
EOF
oci nlb backend-set create --health-checker file://talos-health-checker.json --name talos --network-load-balancer-id $network_load_balancer_id --policy TWO_TUPLE --is-preserve-source false
oci nlb listener create --default-backend-set-name talos --name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --protocol TCP
cat <<EOF > controlplane-health-checker.json
{
"intervalInMillis": 10000,
"port": 6443,
"protocol": "HTTPS",
"returnCode": 401,
"urlPath": "/readyz"
}
EOF
oci nlb backend-set create --health-checker file://controlplane-health-checker.json --name controlplane --network-load-balancer-id $network_load_balancer_id --policy TWO_TUPLE --is-preserve-source false
oci nlb listener create --default-backend-set-name controlplane --name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --protocol TCP
# Save the external IP
oci nlb network-load-balancer list --compartment-id $compartment_id --display-name controlplane-lb --query 'data.items[0]."ip-addresses"'
Create the Machine Configuration Files
Generating Base Configurations
Using the IP/DNS name of the loadbalancer created earlier, generate the base configuration files for the Talos machines by issuing:
$ talosctl gen config talos-k8s-oracle-tutorial https://<load balancer IP or DNS>:6443 --additional-sans <load balancer IP or DNS>
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch
with RFC6902 jsonpatches which will be applied during the config generation.
Validate the Configuration Files
Validate any edited machine configs with:
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
Create the Control Plane Nodes
Create the control plane nodes with:
export shape='VM.Standard.A1.Flex'
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export image_id=$(oci compute image list --compartment-id $compartment_id --shape $shape --operating-system Talos --limit 1 --query data[0].id --raw-output)
export availability_domain=$(oci iam availability-domain list --compartment-id=$compartment_id --query data[0].name --raw-output)
export network_load_balancer_id=$(oci nlb network-load-balancer list --compartment-id $compartment_id --display-name controlplane-lb --query 'data.items[0].id' --raw-output)
cat <<EOF > shape.json
{
"memoryInGBs": 4,
"ocpus": 1
}
EOF
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-1 --private-ip 10.0.0.11 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-2 --private-ip 10.0.0.12 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
export instance_id=$(oci compute instance launch --shape $shape --shape-config file://shape.json --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name controlplane-3 --private-ip 10.0.0.13 --assign-public-ip true --launch-options '{"networkType":"PARAVIRTUALIZED"}' --user-data-file controlplane.yaml --query 'data.id' --raw-output)
oci nlb backend create --backend-set-name talos --network-load-balancer-id $network_load_balancer_id --port 50000 --target-id $instance_id
oci nlb backend create --backend-set-name controlplane --network-load-balancer-id $network_load_balancer_id --port 6443 --target-id $instance_id
Create the Worker Nodes
Create the worker nodes with the following command, repeating (and incrementing the name counter) as many times as desired.
export subnet_id=$(oci network subnet list --compartment-id=$compartment_id --display-name kubernetes --query data[0].id --raw-output)
export image_id=$(oci compute image list --compartment-id $compartment_id --operating-system Talos --limit 1 --query data[0].id --raw-output)
export availability_domain=$(oci iam availability-domain list --compartment-id=$compartment_id --query data[0].name --raw-output)
export shape='VM.Standard.E2.1.Micro'
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-1 --assign-public-ip true --user-data-file worker.yaml
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-2 --assign-public-ip true --user-data-file worker.yaml
oci compute instance launch --shape $shape --availability-domain $availability_domain --compartment-id $compartment_id --image-id $image_id --subnet-id $subnet_id --display-name worker-3 --assign-public-ip true --user-data-file worker.yaml
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP.
This can be found by issuing:
export instance_id=$(oci compute instance list --compartment-id $compartment_id --display-name controlplane-1 --query 'data[0].id' --raw-output)
oci compute instance list-vnics --instance-id $instance_id --query 'data[0]."private-ip"' --raw-output
Set the endpoints
and nodes
for your talosconfig with:
talosctl --talosconfig talosconfig config endpoint <load balancer IP or DNS>
talosctl --talosconfig talosconfig config node <control-plane-1-IP>
Bootstrap etcd
on the first control plane node with:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig .
3.13 - Scaleway
Talos is known to work on scaleway.com; however, it is currently undocumented.
3.14 - UpCloud
In this guide we will create an HA Kubernetes cluster with 3 control plane nodes and 1 worker node. We assume some familiarity with UpCloud. If you need more information on UpCloud specifics, please see the official UpCloud documentation.
Create the Image
The best way to create an image for UpCloud is to build one using HashiCorp Packer, with the upcloud-amd64.raw.xz image available from the Image Factory.
Using the general ISO is also possible, but the UpCloud image has some UpCloud-specific features implemented, such as the fetching of metadata and user data to configure the nodes.
To create the cluster, you need a few things locally installed:
NOTE: Make sure your account allows API connections. To do so, log into UpCloud control panel and go to People -> Account -> Permissions -> Allow API connections checkbox. It is recommended to create a separate subaccount for your API access and only set the API permission.
To use the UpCloud CLI, you need to create a config in $HOME/.config/upctl.yaml
username: your_upcloud_username
password: your_upcloud_password
To use the UpCloud Packer plugin, you also need to export these credentials as environment variables, e.g. by putting the following in your .bashrc or .zshrc:
export UPCLOUD_USERNAME="<username>"
export UPCLOUD_PASSWORD="<password>"
Next create a config file for packer to use:
# upcloud.pkr.hcl
packer {
required_plugins {
upcloud = {
version = ">=v1.0.0"
source = "github.com/UpCloudLtd/upcloud"
}
}
}
variable "talos_version" {
type = string
default = "v1.10.0-alpha.0"
}
locals {
image = "https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/${var.talos_version}/upcloud-amd64.raw.xz"
}
variable "username" {
type = string
description = "UpCloud API username"
default = "${env("UPCLOUD_USERNAME")}"
}
variable "password" {
type = string
description = "UpCloud API password"
default = "${env("UPCLOUD_PASSWORD")}"
sensitive = true
}
source "upcloud" "talos" {
username = "${var.username}"
password = "${var.password}"
zone = "us-nyc1"
storage_name = "Debian GNU/Linux 11 (Bullseye)"
template_name = "Talos (${var.talos_version})"
}
build {
sources = ["source.upcloud.talos"]
provisioner "shell" {
inline = [
"apt-get install -y wget xz-utils",
"wget -q -O /tmp/talos.raw.xz ${local.image}",
"xz -d -c /tmp/talos.raw.xz | dd of=/dev/vda",
]
}
provisioner "shell-local" {
inline = [
"upctl server stop --type hard custom",
]
}
}
Now create a new image by issuing the commands shown below.
packer init .
packer build .
After doing this, you can find the custom image in the console interface under storage.
Creating a Cluster via the CLI
Create an Endpoint
To communicate with the Talos cluster you will need a single endpoint that is used to access the cluster. This can either be a loadbalancer that will sit in front of all your control plane nodes, a DNS name with one or more A or AAAA records pointing to the control plane nodes, or directly the IP of a control plane node.
Which option is best for you will depend on your needs. Endpoint selection has been further documented here.
After you decide on which endpoint to use, note down the domain name or IP, as we will need it in the next step.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the endpoint created earlier, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-upcloud-tutorial https://<load balancer IP or DNS>:<port> --install-disk /dev/vda
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking. Depending on the Kubernetes version you want to run, you might need to select a different Talos version, as not all versions are compatible. You can find the support matrix here.
Optionally, you can specify --config-patch
with RFC6902 jsonpatch or yamlpatch
which will be applied during the config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config worker.yaml --mode cloud
worker.yaml is valid for cloud mode
Create the Servers
Create the Control Plane Nodes
Run the following to create three total control plane nodes:
for ID in $(seq 3); do
upctl server create \
--zone us-nyc1 \
--title talos-us-nyc1-master-$ID \
--hostname talos-us-nyc1-master-$ID \
--plan 2xCPU-4GB \
--os "Talos (v1.10.0-alpha.0)" \
--user-data "$(cat controlplane.yaml)" \
--enable-metadata
done
Note: modify the zone and OS depending on your preferences. The OS should match the template name generated with packer in the previous step.
Note the IP address of the first control plane node, as we will need it later.
Create the Worker Nodes
Run the following to create a worker node:
upctl server create \
--zone us-nyc1 \
--title talos-us-nyc1-worker-1 \
--hostname talos-us-nyc1-worker-1 \
--plan 2xCPU-4GB \
--os "Talos (v1.10.0-alpha.0)" \
--user-data "$(cat worker.yaml)" \
--enable-metadata
Bootstrap Etcd
To configure talosctl
we will need the first control plane node’s IP, as noted earlier.
We only add one node IP, as that is the entry into our cluster against which our commands will be run.
All requests to other nodes are proxied through the endpoint, and therefore not
all nodes need to be manually added to the config.
You don’t want to run your commands against all nodes, as this can destroy your
cluster if you are not careful (further documentation).
Set the endpoints
and nodes
:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig talosconfig kubeconfig
It will take a few minutes before Kubernetes has been fully bootstrapped, and is accessible.
You can check if the nodes are registered in Talos by running
talosctl --talosconfig talosconfig get members
To check if your nodes are ready, run
kubectl get nodes
3.15 - Vultr
Creating a Cluster using the Vultr CLI
This guide will demonstrate how to create a highly-available Kubernetes cluster with one worker using the Vultr cloud provider.
Vultr has a very well-documented REST API, and an open-source CLI tool to interact with the API, which will be used in this guide.
Make sure to follow installation and authentication instructions for the vultr-cli
tool.
Boot Options
Upload an ISO Image
First step is to make the Talos ISO available to Vultr by uploading the latest release of the ISO to the Vultr ISO server.
vultr-cli iso create --url https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/vultr-amd64.iso
Make a note of the ID in the output; it will be needed later when creating the instances.
PXE Booting via Image Factory
Talos Linux can be PXE-booted on Vultr using Image Factory, using the vultr
platform: e.g.
https://pxe.factory.talos.dev/pxe/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/vultr-amd64
(this URL references the default schematic and amd64
architecture).
Make a note of the ID
in the output, it will be needed later when creating the instances.
Create a Load Balancer
A load balancer is needed to serve as the Kubernetes endpoint for the cluster.
vultr-cli load-balancer create \
--region $REGION \
--label "Talos Kubernetes Endpoint" \
--port 6443 \
--protocol tcp \
--check-interval 10 \
--response-timeout 5 \
--healthy-threshold 5 \
--unhealthy-threshold 3 \
--forwarding-rules frontend_protocol:tcp,frontend_port:443,backend_protocol:tcp,backend_port:6443
Make a note of the ID
of the load balancer from the output of the above command, it will be needed after the control plane instances are created.
vultr-cli load-balancer get $LOAD_BALANCER_ID | grep ^IP
Make a note of the IP
address, it will be needed later when generating the configuration.
Create the Machine Configuration
Generate Base Configuration
Using the IP address (or DNS name if one was created) of the load balancer created above, generate the machine configuration files for the new cluster.
talosctl gen config talos-kubernetes-vultr https://$LOAD_BALANCER_ADDRESS
Once generated, the machine configuration can be modified as necessary for the new cluster, for instance updating disk installation, or adding SANs for the certificates.
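For example, a sketch of regenerating with an explicit install disk and an extra SAN (both flags appear elsewhere in this document; the values here are assumptions):
talosctl gen config talos-kubernetes-vultr https://$LOAD_BALANCER_ADDRESS \
  --install-disk /dev/vda \
  --additional-sans $LOAD_BALANCER_ADDRESS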
Validate the Configuration Files
talosctl validate --config controlplane.yaml --mode cloud
talosctl validate --config worker.yaml --mode cloud
Create the Nodes
Create the Control Plane Nodes
First a control plane needs to be created, with the example below creating 3 instances in a loop.
The instance type (noted by the --plan vc2-2c-4gb
argument) in the example is for a minimum-spec control plane node, and should be updated to suit the cluster being created.
for id in $(seq 3); do
vultr-cli instance create \
--plan vc2-2c-4gb \
--region $REGION \
--iso $TALOS_ISO_ID \
--host talos-k8s-cp${id} \
--label "Talos Kubernetes Control Plane" \
--tags talos,kubernetes,control-plane
done
Make a note of the instance ID
s, as they are needed to attach to the load balancer created earlier.
vultr-cli load-balancer update $LOAD_BALANCER_ID --instances $CONTROL_PLANE_1_ID,$CONTROL_PLANE_2_ID,$CONTROL_PLANE_3_ID
Once the nodes are booted and waiting in maintenance mode, the machine configuration can be applied to each one in turn.
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_1_ADDRESS --file controlplane.yaml
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_2_ADDRESS --file controlplane.yaml
talosctl --talosconfig talosconfig apply-config --insecure --nodes $CONTROL_PLANE_3_ADDRESS --file controlplane.yaml
Create the Worker Nodes
Now worker nodes can be created and configured in a similar way to the control plane nodes, the difference being mainly in the machine configuration file.
Note that, as with the control plane nodes, the instance type (here set by --plan vc2-1c-1gb) should be changed for the actual cluster requirements.
for id in $(seq 1); do
vultr-cli instance create \
--plan vc2-1c-1gb \
--region $REGION \
--iso $TALOS_ISO_ID \
--host talos-k8s-worker${id} \
--label "Talos Kubernetes Worker" \
--tags talos,kubernetes,worker
done
Once the worker is booted and in maintenance mode, the machine configuration can be applied in the following manner.
talosctl --talosconfig talosconfig apply-config --insecure --nodes $WORKER_1_ADDRESS --file worker.yaml
Bootstrap etcd
Once all the cluster nodes are correctly configured, the cluster can be bootstrapped to become functional.
It is important that the talosctl bootstrap
command be executed only once and against only a single control plane node.
talosctl --talosconfig talosconfig bootstrap --endpoints $CONTROL_PLANE_1_ADDRESS --nodes $CONTROL_PLANE_1_ADDRESS
Configure Endpoints and Nodes
While the cluster goes through the bootstrapping process and begins to self-manage, the talosconfig can be updated with the endpoints and nodes.
talosctl --talosconfig talosconfig config endpoints $CONTROL_PLANE_1_ADDRESS $CONTROL_PLANE_2_ADDRESS $CONTROL_PLANE_3_ADDRESS
talosctl --talosconfig talosconfig config nodes $CONTROL_PLANE_1_ADDRESS $CONTROL_PLANE_2_ADDRESS $CONTROL_PLANE_3_ADDRESS $WORKER_1_ADDRESS
Retrieve the kubeconfig
Finally, with the cluster fully running, the administrative kubeconfig
can be retrieved from the Talos API to be saved locally.
talosctl --talosconfig talosconfig kubeconfig .
Now the kubeconfig
can be used by any of the usual Kubernetes tools to interact with the Talos-based Kubernetes cluster as normal.
4 - Local Platforms
4.1 - Docker
In this guide we will create a Kubernetes cluster in Docker, using a containerized version of Talos.
Running Talos in Docker is intended for use in CI pipelines and for local testing when you need a quick and easy cluster. Furthermore, if you are running Talos in production, it provides an excellent way for developers to develop against the same version of Talos.
Requirements
The following are requirements for running Talos in Docker:
- Docker 18.03 or greater
- a recent version of
talosctl
Note
If you are using Docker Desktop on a macOS computer, and you encounter the error Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?, you may need to manually create the link for the Docker socket:
sudo ln -s "$HOME/.docker/run/docker.sock" /var/run/docker.sock
Caveats
Because Talos will be running in a container, certain APIs are not available. For example, upgrade, reset, and similar APIs don't apply in container mode.
Further, when running on a Mac in docker, due to networking limitations, VIPs are not supported.
Create the Cluster
Creating a local cluster is as simple as:
talosctl cluster create
Once the above finishes successfully, your talosconfig
(~/.talos/config
) and kubeconfig
(~/.kube/config
) will be configured to point to the new cluster.
Note: Startup times can take up to a minute or more before the cluster is available.
Finally, we just need to specify which nodes you want to communicate with using talosctl
.
talosctl can operate on one or all of the nodes in the cluster; this makes cluster-wide commands much easier.
talosctl config nodes 10.5.0.2 10.5.0.3
The Talos and Kubernetes APIs are mapped to a random port on the host machine; the retrieved talosconfig and kubeconfig are configured automatically to point to the new cluster.
Talos API endpoint can be found using talosctl config info
:
$ talosctl config info
...
Endpoints: 127.0.0.1:38423
Kubernetes API endpoint is available with talosctl cluster show
:
$ talosctl cluster show
...
KUBERNETES ENDPOINT https://127.0.0.1:43083
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl containers
for a list of containers in the system
namespace, or talosctl containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl logs <container>
or talosctl logs -k <container>
.
Cleaning Up
To cleanup, run:
talosctl cluster destroy
Multiple Clusters
Multiple Talos Linux clusters can be created on the same host; each cluster will need to have:
- a unique name (default is
talos-default
) - a unique network CIDR (default is
10.5.0.0/24
)
To create a new cluster, run:
talosctl cluster create --name cluster2 --cidr 10.6.0.0/24
To destroy a specific cluster, run:
talosctl cluster destroy --name cluster2
To switch between clusters, use --context
flag:
talosctl --context cluster2 version
kubectl --context admin@cluster2 get nodes
Running Talos in Docker Manually
To run Talos in a container manually, run:
docker run --rm -it \
--name tutorial \
--hostname talos-cp \
--read-only \
--privileged \
--security-opt seccomp=unconfined \
--mount type=tmpfs,destination=/run \
--mount type=tmpfs,destination=/system \
--mount type=tmpfs,destination=/tmp \
--mount type=volume,destination=/system/state \
--mount type=volume,destination=/var \
--mount type=volume,destination=/etc/cni \
--mount type=volume,destination=/etc/kubernetes \
--mount type=volume,destination=/usr/libexec/kubernetes \
--mount type=volume,destination=/opt \
-e PLATFORM=container \
ghcr.io/siderolabs/talos:v1.10.0-alpha.0
The machine configuration submitted to the container should have a host DNS feature enabled with forwardKubeDNSToHost
enabled.
It is used to forward DNS requests to the resolver provided by Docker (or other container runtime).
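For reference, the corresponding machine configuration patch is the same one shown in the Kubernetes platform section above:
machine:
  features:
    hostDNS:
      enabled: true
      forwardKubeDNSToHost: true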
4.2 - QEMU
In this guide we will create a Kubernetes cluster using QEMU.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Requirements
- Linux
- a kernel with
- KVM enabled (
/dev/kvm
must exist) CONFIG_NET_SCH_NETEM
enabledCONFIG_NET_SCH_INGRESS
enabled
- KVM enabled (
- at least
CAP_SYS_ADMIN
andCAP_NET_ADMIN
capabilities - QEMU
bridge
,static
andfirewall
CNI plugins from the standard CNI plugins, andtc-redirect-tap
CNI plugin from the awslabs tc-redirect-tap installed to/opt/cni/bin
(installed automatically bytalosctl
)- iptables
/var/run/netns
directory should exist
Installation
How to get QEMU
Install QEMU with your operating system package manager. For example, on Ubuntu for x86:
apt install qemu-system-x86 qemu-kvm
Install talosctl
You can download talosctl on macOS and Linux via:
brew install siderolabs/tap/talosctl
For manual installation and other platforms, please see the talosctl installation guide.
Install Talos kernel and initramfs
The QEMU provisioner depends on the Talos kernel (vmlinuz) and initramfs (initramfs.xz).
These files can be downloaded from the Talos release:
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/vmlinuz-<arch> -L -o _out/vmlinuz-<arch>
curl https://github.com/siderolabs/talos/releases/download/<version>/initramfs-<arch>.xz -L -o _out/initramfs-<arch>.xz
For example version v1.10.0-alpha.0
:
curl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/vmlinuz-amd64 -L -o _out/vmlinuz-amd64
curl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/initramfs-amd64.xz -L -o _out/initramfs-amd64.xz
Create the Cluster
Before creating the first cluster, create the root state directory as your user so that you can inspect the logs as a non-root user:
mkdir -p ~/.talos/clusters
Create the cluster:
sudo --preserve-env=HOME talosctl cluster create --provisioner qemu
Before the first cluster is created, talosctl
will download the CNI bundle for the VM provisioning and install it to ~/.talos/cni
directory.
Once the above finishes successfully, your talosconfig (~/.talos/config
) will be configured to point to the new cluster, and kubeconfig
will be
downloaded and merged into default kubectl config location (~/.kube/config
).
The cluster provisioning process can be optimized with registry pull-through caches.
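For example, a sketch assuming pull-through caches are already running on the host bridge address at the listed ports (addresses and ports are assumptions):
sudo --preserve-env=HOME talosctl cluster create --provisioner qemu \
  --registry-mirror docker.io=http://10.5.0.1:5000 \
  --registry-mirror registry.k8s.io=http://10.5.0.1:5001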
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl -n 10.5.0.2 containers
for a list of containers in the system
namespace, or talosctl -n 10.5.0.2 containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl -n 10.5.0.2 logs <container>
or talosctl -n 10.5.0.2 logs -k <container>
.
A bridge interface will be created, and assigned the default IP 10.5.0.1. Each node will be directly accessible on the subnet specified at cluster creation time. A loadbalancer runs on 10.5.0.1 by default, which handles loadbalancing for the Kubernetes APIs.
You can see a summary of the cluster state by running:
$ talosctl cluster show --provisioner qemu
PROVISIONER qemu
NAME talos-default
NETWORK NAME talos-default
NETWORK CIDR 10.5.0.0/24
NETWORK GATEWAY 10.5.0.1
NETWORK MTU 1500
NODES:
NAME TYPE IP CPU RAM DISK
talos-default-controlplane-1 ControlPlane 10.5.0.2 1.00 1.6 GB 4.3 GB
talos-default-controlplane-2 ControlPlane 10.5.0.3 1.00 1.6 GB 4.3 GB
talos-default-controlplane-3 ControlPlane 10.5.0.4 1.00 1.6 GB 4.3 GB
talos-default-worker-1 Worker 10.5.0.5 1.00 1.6 GB 4.3 GB
Cleaning Up
To cleanup, run:
sudo --preserve-env=HOME talosctl cluster destroy --provisioner qemu
Note: In the case that the host machine is rebooted before destroying the cluster, you may need to manually remove ~/.talos/clusters/talos-default.
Manual Clean Up
The talosctl cluster destroy command depends heavily on the cluster's state directory, which contains all information related to the cluster: the PIDs and the network associated with the cluster nodes.
If you happen to have deleted the state folder by mistake, or you would like to clean up the environment, here are the steps to do it manually:
Remove VM Launchers
Find the process of talosctl qemu-launch
:
ps -elf | grep 'talosctl qemu-launch'
To remove the VMs manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where VMs are running with PIDs 157615 and 157617
ps -elf | grep '[t]alosctl qemu-launch'
0 S root 157615 2835 0 80 0 - 184934 - 07:53 ? 00:00:00 talosctl qemu-launch
0 S root 157617 2835 0 80 0 - 185062 - 07:53 ? 00:00:00 talosctl qemu-launch
sudo kill -s SIGTERM 157615
sudo kill -s SIGTERM 157617
Stopping VMs
Find the process of qemu-system
:
ps -elf | grep 'qemu-system'
To stop the VMs manually, execute:
sudo kill -s SIGTERM <PID>
Example output showing the running qemu-system processes:
ps -elf | grep qemu-system
2 S root 1061663 1061168 26 80 0 - 1786238 - 14:05 ? 01:53:56 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/home/username/.talos/clusters/talos-default/bootstrap-master.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=1e:86:c6:b4:7c:c4 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=7ec0a73c-826e-4eeb-afd1-39ff9f9160ca -machine q35,accel=kvm
2 S root 1061663 1061170 67 80 0 - 621014 - 21:23 ? 00:00:07 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/homeusername/.talos/clusters/talos-default/pxe-1.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=36:f3:2f:c3:9f:06 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=ce12a0d0-29c8-490f-b935-f6073ab916a6 -machine q35,accel=kvm
sudo kill -s SIGTERM 1061663
sudo kill -s SIGTERM 1061663
Remove load balancer
Find the process of talosctl loadbalancer-launch
:
ps -elf | grep 'talosctl loadbalancer-launch'
To remove the LB manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where loadbalancer is running with PID 157609
ps -elf | grep '[t]alosctl loadbalancer-launch'
4 S root 157609 2835 0 80 0 - 184998 - 07:53 ? 00:00:07 talosctl loadbalancer-launch --loadbalancer-addr 10.5.0.1 --loadbalancer-upstreams 10.5.0.2
sudo kill -s SIGTERM 157609
Remove DHCP server
Find the process of talosctl dhcpd-launch
:
ps -elf | grep 'talosctl dhcpd-launch'
To remove the DHCP server manually, execute:
sudo kill -s SIGTERM <PID>
Example output, where the DHCP server is running with PID 157609
ps -elf | grep '[t]alosctl dhcpd-launch'
4 S root 157609 2835 0 80 0 - 184998 - 07:53 ? 00:00:07 talosctl dhcpd-launch --state-path /home/username/.talos/clusters/talos-default --addr 10.5.0.1 --interface talosbd9c32bc
sudo kill -s SIGTERM 157609
Remove network
This is the trickier part if you have already deleted the state folder.
If you didn't, the bridge name is written in state.yaml in the ~/.talos/clusters/<cluster-name> directory.
sudo cat ~/.talos/clusters/<cluster-name>/state.yaml | grep bridgename
bridgename: talos<uuid>
If you only had one cluster, then it will be the interface with name
talos<uuid>
46: talos<uuid>: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether a6:72:f4:0a:d3:9c brd ff:ff:ff:ff:ff:ff
inet 10.5.0.1/24 brd 10.5.0.255 scope global talos17c13299
valid_lft forever preferred_lft forever
inet6 fe80::a472:f4ff:fe0a:d39c/64 scope link
valid_lft forever preferred_lft forever
To remove this interface:
sudo ip link del talos<uuid>
Remove state directory
To remove the state directory execute:
sudo rm -Rf /home/$USER/.talos/clusters/<cluster-name>
Troubleshooting
Logs
Inspect logs directory
sudo cat ~/.talos/clusters/<cluster-name>/*.log
Logs are saved as <cluster-name>-<role>-<node-id>.log. For example, for a cluster named k8s:
ls -la ~/.talos/clusters/k8s | grep log
-rw-r--r--. 1 root root 69415 Apr 26 20:58 k8s-master-1.log
-rw-r--r--. 1 root root 68345 Apr 26 20:58 k8s-worker-1.log
-rw-r--r--. 1 root root 24621 Apr 26 20:59 lb.log
Inspect logs during the installation
tail -f ~/.talos/clusters/<cluster-name>/*.log
4.3 - VirtualBox
In this guide we will create a Kubernetes cluster using VirtualBox.
Video Walkthrough
To see a live demo of this writeup, visit Youtube here:
Installation
How to Get VirtualBox
Install VirtualBox with your operating system package manager or from the website. For example, on Ubuntu for x86:
apt install virtualbox
Install talosctl
You can download talosctl on macOS and Linux via:
brew install siderolabs/tap/talosctl
For manual installation and other platforms, please see the talosctl installation guide.
Download ISO Image
Download the ISO image from Image Factory.
mkdir -p _out/
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/<version>/metal-<arch>.iso -L -o _out/metal-<arch>.iso
For example version v1.10.0-alpha.0
for linux
platform:
mkdir -p _out/
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.10.0-alpha.0/metal-amd64.iso -L -o _out/metal-amd64.iso
Create VMs
Start by creating a new VM by clicking the “New” button in the VirtualBox UI:
Supply a name for this VM, and specify the Type and Version:
Edit the memory to supply at least 2GB of RAM for the VM:
Proceed through the disk settings, keeping the defaults. You can increase the disk space if desired.
Once created, select the VM and hit “Settings”:
In the “System” section, supply at least 2 CPUs:
In the “Network” section, switch the network “Attached To” section to “Bridged Adapter”:
Finally, in the “Storage” section, select the optical drive and, on the right, select the ISO by browsing your filesystem:
Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.
Start Control Plane Node
Once the VMs have been created and updated, start the VM that will be the first control plane node.
This VM will boot the ISO image specified earlier and enter “maintenance mode”.
Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received.
Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP
for the rest of this guide.
If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4
.
Generate Machine Configurations
With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
talosctl gen config talos-vbox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out
directory: controlplane.yaml, worker.yaml, and talosconfig.
Create Control Plane Node
Using the controlplane.yaml
generated above, you can now apply this config using talosctl.
Issue:
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml
You should now see some action in the VirtualBox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.
Note: This process can be repeated multiple times to create an HA control plane.
Create Worker Node
Create at least a single worker node using a process similar to the control plane creation above.
Start the worker node VM and wait for it to enter “maintenance mode”.
Take note of the worker node’s IP address, which will be referred to as $WORKER_IP
Issue:
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml
Note: This process can be repeated multiple times to add additional workers.
Using the Cluster
Once the cluster is available, you can make use of talosctl
and kubectl
to interact with the cluster.
For example, to view current running containers, run talosctl containers
for a list of containers in the system
namespace, or talosctl containers -k
for the k8s.io
namespace.
To view the logs of a container, use talosctl logs <container>
or talosctl logs -k <container>
.
First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
Bootstrap Etcd
Set the endpoints
and nodes
:
talosctl --talosconfig $TALOSCONFIG config endpoint <control plane 1 IP>
talosctl --talosconfig $TALOSCONFIG config node <control plane 1 IP>
Bootstrap etcd
:
talosctl --talosconfig $TALOSCONFIG bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig
by running:
talosctl --talosconfig $TALOSCONFIG kubeconfig .
You can then use kubectl in this fashion:
kubectl get nodes
Cleaning Up
To cleanup, simply stop and delete the virtual machines from the VirtualBox UI.
5 - Single Board Computers
5.1 - Banana Pi M64
Prerequisites
You will need
talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image using Image Factory
The default schematic id for “vanilla” Banana Pi M64 is 8e11dcb3c2803fbe893ab201fcadf1ef295568410e7ced95c6c8b122a5070ce4
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/8e11dcb3c2803fbe893ab201fcadf1ef295568410e7ced95c6c8b122a5070ce4/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/8e11dcb3c2803fbe893ab201fcadf1ef295568410e7ced95c6c8b122a5070ce4:v1.10.0-alpha.0
5.2 - Friendlyelec Nano PI R4S
Prerequisites
You will need
talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” NanoPi R4S is 5f74a09891d5830f0b36158d3d9ea3b1c9cc019848ace08ff63ba255e38c8da4
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/5f74a09891d5830f0b36158d3d9ea3b1c9cc019848ace08ff63ba255e38c8da4/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/5f74a09891d5830f0b36158d3d9ea3b1c9cc019848ace08ff63ba255e38c8da4:v1.10.0-alpha.0
5.3 - Jetson Nano
Prerequisites
You will need
talosctl
- an SD card/USB drive
- crane CLI
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Flashing the firmware to on-board SPI flash
Flashing the firmware only needs to be done once.
We will use the R32.7.2 release for the Jetson Nano.
Most of the instructions are similar to this doc, except that we'll be using an upstream version of u-boot with patches from NVIDIA u-boot so that USB boot also works.
Before flashing we need the following:
- A USB-A to micro USB cable
- A jumper wire to enable recovery mode
- A HDMI monitor to view the logs if the USB serial adapter is not available
- A USB to Serial adapter with 3.3V TTL (optional)
- A 5V DC barrel jack
If you’re planning to use the serial console follow the documentation here
First start by downloading the Jetson Nano L4T release.
curl -SLO https://developer.nvidia.com/embedded/l4t/r32_release_v7.1/t210/jetson-210_linux_r32.7.2_aarch64.tbz2
Next we will extract the L4T release and replace the u-boot
binary with the patched version.
tar xf jetson-210_linux_r32.7.2_aarch64.tbz2
cd Linux_for_Tegra
crane --platform=linux/arm64 export ghcr.io/siderolabs/sbc-jetson:v0.1.0 - | tar xf - --strip-components=4 -C bootloader/t210ref/p3450-0000/ artifacts/arm64/u-boot/jetson_nano/u-boot.bin
Next we will flash the firmware to the Jetson Nano SPI flash. In order to do that we need to put the Jetson Nano into Force Recovery Mode (FRC). We will use the instructions from here
- Ensure that the Jetson Nano is powered off. There is no need for the SD card/USB storage/network cable to be connected
- Connect the micro USB cable to the micro USB port on the Jetson Nano, don’t plug the other end to the PC yet
- Enable Force Recovery Mode (FRC) by placing a jumper across the FRC pins on the Jetson Nano
- For board revision A02, these are pins
3
and4
of headerJ40
- For board revision B01, these are pins
9
and10
of headerJ50
- For board revision A02, these are pins
- Place another jumper across
J48
to enable power from the DC jack and connect the Jetson Nano to the DC jackJ25
- Now connect the other end of the micro USB cable to the PC and remove the jumper wire from the FRC pins
Now the Jetson Nano is in Force Recovery Mode (FRC) and can be confirmed by running the following command
lsusb | grep -i "nvidia"
Now we can move on to flashing the firmware.
sudo ./flash p3448-0000-max-spi external
This will flash the firmware to the Jetson Nano SPI flash and you’ll see a lot of output. If you’ve connected the serial console you’ll also see the progress there. Once the flashing is done you can disconnect the USB cable and power off the Jetson Nano.
Download the Image
The default schematic id for “vanilla” Jetson Nano is c7d6f36c6bdfb45fd63178b202a67cff0dd270262269c64886b43f76880ecf1e
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/c7d6f36c6bdfb45fd63178b202a67cff0dd270262269c64886b43f76880ecf1e/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
Now dd
the image to your SD card/USB storage:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M status=progress
Replace /dev/mmcblk0 with the name of your SD card/USB storage.
Bootstrapping the Node
Insert the SD card/USB storage into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/c7d6f36c6bdfb45fd63178b202a67cff0dd270262269c64886b43f76880ecf1e:v1.10.0-alpha.0
5.4 - Libre Computer Board ALL-H3-CC
Prerequisites
You will need
talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” Libretech H3 CC H5 is 5689d7795f91ac5bf6ccc85093fad8f8b27f6ea9d96a9ac5a059997bffd8ad5c
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/5689d7795f91ac5bf6ccc85093fad8f8b27f6ea9d96a9ac5a059997bffd8ad5c/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node.
Create an installer-patch.yaml containing a reference to the installer image generated from an overlay, as sketched below:
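A minimal sketch of such a patch, using the installer image for this board's default schematic (the same image reference used in the Upgrading section below):
machine:
  install:
    image: factory.talos.dev/installer/5689d7795f91ac5bf6ccc85093fad8f8b27f6ea9d96a9ac5a059997bffd8ad5c:v1.10.0-alpha.0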
Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/5689d7795f91ac5bf6ccc85093fad8f8b27f6ea9d96a9ac5a059997bffd8ad5c:v1.10.0-alpha.0
5.5 - Orange Pi R1 Plus LTS
Prerequisites
You will need
talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image using Image Factory
The default schematic id for “vanilla” Orange Pi R1 Plus LTS is da388062cd9318efdc7391982a77ebb2a97ed4fbda68f221354c17839a750509
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/da388062cd9318efdc7391982a77ebb2a97ed4fbda68f221354c17839a750509/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/da388062cd9318efdc7391982a77ebb2a97ed4fbda68f221354c17839a750509:v1.10.0-alpha.0
5.6 - Pine64
Prerequisites
You will need
- talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” Pine64 is 185431e0f0bf34c983c6f47f4c6d3703aa2f02cd202ca013216fd71ffc34e175
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/185431e0f0bf34c983c6f47f4c6d3703aa2f02cd202ca013216fd71ffc34e175/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/185431e0f0bf34c983c6f47f4c6d3703aa2f02cd202ca013216fd71ffc34e175:v1.10.0-alpha.0
5.7 - Pine64 Rock64
Prerequisites
You will need
- talosctl
- an SD card
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” Pine64 Rock64 is 0e162298269125049a51ec0a03c2ef85405a55e1d2ac36a7ef7292358cf3ce5a
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/0e162298269125049a51ec0a03c2ef85405a55e1d2ac36a7ef7292358cf3ce5a/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/0e162298269125049a51ec0a03c2ef85405a55e1d2ac36a7ef7292358cf3ce5a:v1.10.0-alpha.0
5.8 - Radxa ROCK 4C Plus
Prerequisites
You will need
- talosctl
- an SD card, eMMC, USB drive, or NVMe drive
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” Rock 4c Plus is ed7091ab924ef1406dadc4623c90f245868f03d262764ddc2c22c8a19eb37c1c
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/ed7091ab924ef1406dadc4623c90f245868f03d262764ddc2c22c8a19eb37c1c/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card/eMMC/USB/NVMe drive can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Proceed by booting from an SD card or eMMC.
Booting from SD card or eMMC
Insert the SD card into the board, turn it on and proceed to bootstrapping the node.
Bootstrapping the Node
Wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/ed7091ab924ef1406dadc4623c90f245868f03d262764ddc2c22c8a19eb37c1c:v1.10.0-alpha.0
5.9 - Radxa ROCK PI 4
Prerequisites
You will need
- talosctl
- an SD card, eMMC, USB drive, or NVMe drive
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” RockPi 4 is 25d2690bb48685de5939edd6dee83a0e09591311e64ad03c550de00f8a521f51
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/25d2690bb48685de5939edd6dee83a0e09591311e64ad03c550de00f8a521f51/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card/eMMC/USB/NVMe drive can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
The user has two options to proceed:
- booting from an SD card or eMMC
- booting from a USB or NVMe drive (requires the RockPi board to have the SPI flash)
Booting from SD card or eMMC
Insert the SD card into the board, turn it on and proceed to bootstrapping the node.
Booting from USB or NVMe
This requires the user to flash the RockPi SPI flash with u-boot.
Follow the Radxa docs on Install on M.2 NVME SSD
After the above steps, Talos will boot from the NVMe/USB drive and enter maintenance mode. Proceed to bootstrapping the node.
Bootstrapping the Node
Wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/25d2690bb48685de5939edd6dee83a0e09591311e64ad03c550de00f8a521f51:v1.10.0-alpha.0
5.10 - Radxa ROCK PI 4C
Prerequisites
You will need
- talosctl
- an SD card, eMMC, USB drive, or NVMe drive
Download the latest talosctl
.
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
The default schematic id for “vanilla” RockPi 4c is 08e72e242b71f42c9db5bed80e8255b2e0d442a372bc09055b79537d9e3ce191
.
Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/08e72e242b71f42c9db5bed80e8255b2e0d442a372bc09055b79537d9e3ce191/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
The path to your SD card/eMMC/USB/NVMe drive can be found using fdisk
on Linux or diskutil
on macOS.
In this example, we will assume /dev/mmcblk0
.
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
The user has two options to proceed:
- booting from an SD card or eMMC
- booting from a USB or NVMe drive (requires the RockPi board to have the SPI flash)
Booting from SD card or eMMC
Insert the SD card into the board, turn it on and proceed to bootstrapping the node.
Booting from USB or NVMe
This requires the user to flash the RockPi SPI flash with u-boot.
Follow the Radxa docs on Install on M.2 NVME SSD
After the above steps, Talos will boot from the NVMe/USB drive and enter maintenance mode. Proceed to bootstrapping the node.
Bootstrapping the Node
Wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/08e72e242b71f42c9db5bed80e8255b2e0d442a372bc09055b79537d9e3ce191:v1.10.0-alpha.0
5.11 - Raspberry Pi Series
The Talos disk image for the Raspberry Pi generic should, in theory, work for the boards supported by u-boot rpi_arm64_defconfig.
This has only been officially tested on the Raspberry Pi 4 and community-tested on one variant of the Compute Module 4 using Super 6C boards.
If you have tested this on other Raspberry Pi boards, please let us know.
Video Walkthrough
To see a live demo of this writeup, see the video below:
Prerequisites
You will need
- talosctl
- an SD card
Download the latest talosctl
.
curl -sL 'https://www.talos.dev/install' | bash
Updating the EEPROM
Use Raspberry Pi Imager to write an EEPROM update image to a spare SD card. Select Misc utility images under the Operating System tab.
Remove the SD card from your local machine and insert it into the Raspberry Pi. Power the Raspberry Pi on, and wait at least 10 seconds. If successful, the green LED light will blink rapidly (forever), otherwise an error pattern will be displayed. If an HDMI display is attached to the port closest to the power/USB-C port, the screen will display green for success or red if a failure occurs. Power off the Raspberry Pi and remove the SD card from it.
Note: Updating the bootloader only needs to be done once.
Download the Image
The default schematic id for “vanilla” Raspberry Pi generic image is ee21ef4a5ef808a9b7484cc0dda0f25075021691c8c09a276591eedb638ea1f9
. Refer to the Image Factory documentation for more information.
Download the image and decompress it:
curl -LO https://factory.talos.dev/image/ee21ef4a5ef808a9b7484cc0dda0f25075021691c8c09a276591eedb638ea1f9/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Writing the Image
Now dd
the image to your SD card:
sudo dd if=metal-arm64.raw of=/dev/mmcblk0 conv=fsync bs=4M
Bootstrapping the Node
Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Once the interactive installation is applied, the cluster will form and you can then use kubectl
.
Note: if you have an HDMI display attached and it shows only a rainbow splash, please use the other HDMI port, the one closest to the power/USB-C port.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
Upgrading
For example, to upgrade to the latest version of Talos, you can run:
talosctl -n <node IP or DNS name> upgrade --image=factory.talos.dev/installer/ee21ef4a5ef808a9b7484cc0dda0f25075021691c8c09a276591eedb638ea1f9:v1.10.0-alpha.0
Troubleshooting
The following table can be used to troubleshoot booting issues:
| Long Flashes | Short Flashes | Status |
|---|---|---|
| 0 | 3 | Generic failure to boot |
| 0 | 4 | start*.elf not found |
| 0 | 7 | Kernel image not found |
| 0 | 8 | SDRAM failure |
| 0 | 9 | Insufficient SDRAM |
| 0 | 10 | In HALT state |
| 2 | 1 | Partition not FAT |
| 2 | 2 | Failed to read from partition |
| 2 | 3 | Extended partition not FAT |
| 2 | 4 | File signature/hash mismatch - Pi 4 |
| 4 | 4 | Unsupported board type |
| 4 | 5 | Fatal firmware error |
| 4 | 6 | Power failure type A |
| 4 | 7 | Power failure type B |
5.12 - Turing RK1
Prerequisites
Before you start, ensure you have the latest talosctl. Download it with:
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
Download the Image
Go to https://factory.talos.dev, select Single Board Computers, select the version, and select Turing RK1 from the options.
Choose your desired extensions and fill in the kernel command line arguments if needed.
Download the disk image and decompress it:
curl -LO https://factory.talos.dev/image/[uuid]/v1.10.0-alpha.0/metal-arm64.raw.xz
xz -d metal-arm64.raw.xz
Boot options
You can boot Talos from:
- eMMC
- a USB or NVMe drive (requires an SPI image on the eMMC)
Booting from eMMC
Flash the image to the eMMC and power on the node (or use the Web UI of the Turing Pi 2):
tpi flash -n <NODENUMBER> -i metal-arm64.raw
tpi power on -n <NODENUMBER>
Proceed to bootstrapping the node.
Booting from USB or NVMe
Requirements
To boot from USB or NVMe, flash a u-boot SPI image (part of the SBC overlay) to the eMMC.
Steps
Skip step 1 if you already installed your NVMe drive.
If you have a USB to NVMe adapter, write the Talos image to the USB drive:
sudo dd if=metal-arm64.raw of=/dev/sda
Install the NVMe drive in the Turing Pi 2 board.
If the NVMe drive is/was already installed:
Flash the Turing RK1 variant of Ubuntu to the eMMC.
Boot into the Ubuntu image and write the Talos image directly to the NVMe drive:
sudo dd if=metal-arm64.raw of=/dev/nvme0n1
Find the latest release tag of the sbc-rockchip repo, then download the SBC overlay image and extract the SPI image:
crane --platform=linux/arm64 export ghcr.io/siderolabs/sbc-rockchip:<releasetag> | tar x --strip-components=4 artifacts/arm64/u-boot/turingrk1/u-boot-rockchip-spi.bin
Flash the eMMC with the Talos raw image, even if Talos was previously installed (or use the Web UI of the Turing Pi 2):
tpi flash -n <NODENUMBER> -i metal-turing_rk1-arm64.raw
Flash the SPI image to set the boot order and remove unnecessary partitions (or use the Web UI of the Turing Pi 2):
tpi flash -n <NODENUMBER> -i u-boot-rockchip-spi.bin
tpi power on -n <NODENUMBER>
Talos will now boot from the NVMe/USB and enter maintenance mode.
Bootstrapping the Node
To monitor boot messages, run the following command (repeat as needed):
tpi uart -n <NODENUMBER> get
Wait until instructions for bootstrapping appear. Follow the UART instructions to connect to the interactive installer:
talosctl apply-config --insecure --mode=interactive --nodes <node IP or DNS name>
Alternatively, generate and apply a configuration:
talosctl gen config <cluster-name> https://<endpoint>:6443
talosctl apply-config --insecure --nodes <node IP or DNS name> -f <worker/controlplane>.yaml
Copy your talosconfig to ~/.talos/config and fill in the node and endpoint fields with the IP address of the node.
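For example (a sketch; substitute the node's actual IP address):
talosctl --talosconfig=./talosconfig config endpoint <node IP>
talosctl --talosconfig=./talosconfig config node <node IP>
cp ./talosconfig ~/.talos/config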
Once applied, the cluster will form, and you can use kubectl
.
Retrieve the kubeconfig
Retrieve the admin kubeconfig
by running:
talosctl kubeconfig
6 - Boot Assets
Talos Linux provides boot images via Image Factory, but these images can be customized further for a specific use case:
- adding system extensions
- updating kernel command line arguments
- using custom META contents, e.g. for metal network configuration
- generating SecureBoot images signed with a custom key
- generating disk images for SBC’s (Single Board Computers)
There are two ways to generate Talos boot assets:
- using Image Factory service (recommended)
- manually using imager container image (advanced)
Image Factory is easier to use, but it only produces images for official Talos Linux releases, official Talos Linux system extensions and official Talos Overlays.
The imager
container can be used to generate images from the main
branch, with local changes, or with custom system extensions.
Image Factory
Image Factory is a service that generates Talos boot assets on demand. Image Factory allows generating boot assets for the official Talos Linux releases, official Talos Linux system extensions, and official Talos Overlays.
The main concept of the Image Factory is a schematic, which defines the customization of the boot asset. Once the schematic is configured, Image Factory can be used to pull various Talos Linux boot assets (disk images, ISOs, installer images, and PXE boot scripts) across different architectures, Talos versions, and platforms.
Sidero Labs maintains a public Image Factory instance at https://factory.talos.dev. Image Factory provides a simple UI to prepare schematics and retrieve asset links.
Example: Bare-metal with Image Factory
Let’s assume we want to boot Talos on a bare-metal machine with an Intel CPU and add the gvisor container runtime to the image.
Also, we want to disable predictable network interface names with the net.ifnames=0 kernel argument.
First, let’s create the schematic file bare-metal.yaml
:
# bare-metal.yaml
customization:
extraKernelArgs:
- net.ifnames=0
systemExtensions:
officialExtensions:
- siderolabs/gvisor
- siderolabs/intel-ucode
The schematic doesn’t contain system extension versions; Image Factory will pick the correct version matching the Talos Linux release.
And now we can upload the schematic to the Image Factory to retrieve its ID:
$ curl -X POST --data-binary @bare-metal.yaml https://factory.talos.dev/schematics
{"id":"b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176"}
We will use the returned schematic ID b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176 to generate the boot assets.
The schematic ID is based on the schematic contents, so uploading the same schematic will return the same ID.
Now we have two options to boot our bare-metal machine:
- using ISO image: https://factory.talos.dev/image/b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176/v1.10.0-alpha.0/metal-amd64.iso (download it and burn to a CD/DVD or USB stick)
- PXE booting via iPXE script: https://factory.talos.dev/pxe/b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176/v1.10.0-alpha.0/metal-amd64
The Image Factory URL contains both schematic ID and Talos version, and both can be changed to generate different boot assets.
Once the bare-metal machine is booted up for the first time, it will require the Talos Linux installer image to be installed on the disk.
The installer
image will be produced by the Image Factory as well:
# Talos machine configuration patch
machine:
install:
image: factory.talos.dev/installer/b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176:v1.10.0-alpha.0
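Assuming the patch above is saved as install-image-patch.yaml (the file name is illustrative), it can be applied when generating the machine configuration, for example:
talosctl gen config my-cluster https://<load balancer IP or DNS>:6443 --config-patch @install-image-patch.yaml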
Once installed, the machine can be upgraded to a new version of Talos by referencing a new installer image:
talosctl upgrade --image factory.talos.dev/installer/b8e8fbbe1b520989e6c52c8dc8303070cb42095997e76e812fa8892393e1d176:<new_version>
The same upgrade process can be used to transition to a new set of system extensions: generate a new schematic with the new set of system extensions, and upgrade the machine to the new schematic ID:
talosctl upgrade --image factory.talos.dev/installer/<new_schematic_id>:v1.10.0-alpha.0
Example: Raspberry Pi generic with Image Factory
Let’s assume we want to boot Talos on a Raspberry Pi with iscsi-tools
system extension.
First, let’s create the schematic file rpi_generic.yaml
:
# rpi_generic.yaml
overlay:
name: rpi_generic
image: siderolabs/sbc-raspberrypi
customization:
systemExtensions:
officialExtensions:
- siderolabs/iscsi-tools
The schematic doesn’t contain any system extension or overlay versions; Image Factory will pick the correct versions matching the Talos Linux release.
And now we can upload the schematic to the Image Factory to retrieve its ID:
$ curl -X POST --data-binary @rpi_generic.yaml https://factory.talos.dev/schematics
{"id":"0db665edfda21c70194e7ca660955425d16cec2aa58ff031e2abf72b7c328585"}
We will use the returned schematic ID 0db665edfda21c70194e7ca660955425d16cec2aa58ff031e2abf72b7c328585 to generate the boot assets.
The schematic ID is based on the schematic contents, so uploading the same schematic will return the same ID.
Now we can download the metal arm64 image:
- https://factory.talos.dev/image/0db665edfda21c70194e7ca660955425d16cec2aa58ff031e2abf72b7c328585/v1.10.0-alpha.0/metal-arm64.raw.xz (download it and burn to a boot media)
The Image Factory URL contains both schematic ID and Talos version, and both can be changed to generate different boot assets.
Once installed, the machine can be upgraded to a new version of Talos by referencing a new installer image:
talosctl upgrade --image factory.talos.dev/installer/0db665edfda21c70194e7ca660955425d16cec2aa58ff031e2abf72b7c328585:<new_version>
The same upgrade process can be used to transition to a new set of system extensions: generate a new schematic with the new set of system extensions, and upgrade the machine to the new schematic ID:
talosctl upgrade --image factory.talos.dev/installer/<new_schematic_id>:v1.10.0-alpha.0
Example: AWS with Image Factory
Talos Linux is installed on AWS from a disk image (AWS AMI), so only a single boot asset is required.
Let’s assume we want to boot Talos on AWS with gvisor
container runtime system extension.
First, let’s create the schematic file aws.yaml
:
# aws.yaml
customization:
systemExtensions:
officialExtensions:
- siderolabs/gvisor
And now we can upload the schematic to the Image Factory to retrieve its ID:
$ curl -X POST --data-binary @aws.yaml https://factory.talos.dev/schematics
{"id":"d9ff89777e246792e7642abd3220a616afb4e49822382e4213a2e528ab826fe5"}
We will use the returned schematic ID d9ff89777e246792e7642abd3220a616afb4e49822382e4213a2e528ab826fe5 to generate the boot assets.
Now we can download the AWS disk image from the Image Factory:
curl -LO https://factory.talos.dev/image/d9ff89777e246792e7642abd3220a616afb4e49822382e4213a2e528ab826fe5/v1.10.0-alpha.0/aws-amd64.raw.xz
Now the aws-amd64.raw.xz
file contains the customized Talos AWS disk image, which can be uploaded as an AMI to AWS.
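As a rough sketch of how the image might be turned into an AMI using the AWS CLI (assuming the CLI is configured and an S3 bucket already exists; the bucket and region names are placeholders, and the full procedure is covered by the AWS platform guide):
xz -d aws-amd64.raw.xz
aws s3 cp aws-amd64.raw s3://<your-bucket>/aws-amd64.raw
aws ec2 import-snapshot --region <region> --description "Talos" --disk-container "Format=raw,UserBucket={S3Bucket=<your-bucket>,S3Key=aws-amd64.raw}"
The imported snapshot can then be registered as an AMI.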
Once the AWS VM is created from the AMI, it can be upgraded to a different Talos version or a different schematic using talosctl upgrade
:
# upgrade to a new Talos version
talosctl upgrade --image factory.talos.dev/installer/d9ff89777e246792e7642abd3220a616afb4e49822382e4213a2e528ab826fe5:<new_version>
# upgrade to a new schematic
talosctl upgrade --image factory.talos.dev/installer/<new_schematic_id>:v1.10.0-alpha.0
Imager
A custom disk image or boot asset can be generated by using the Talos Linux imager
container: ghcr.io/siderolabs/imager:v1.10.0-alpha.0
.
The imager
container image can be checked by verifying its signature.
The generation process can be run with a simple docker run
command:
docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out -v /dev:/dev --privileged ghcr.io/siderolabs/imager:v1.10.0-alpha.0 <image-kind> [optional: customization]
A quick guide to the flags used for docker run:
- --rm removes the container after the run (as it’s not going to be used anymore)
- -t attaches a terminal for colorized output; it can be removed if used in scripts
- -v $PWD/_out:/secureboot:ro mounts the SecureBoot keys into the container (can be skipped if not generating a SecureBoot image)
- -v $PWD/_out:/out mounts the output directory (where the generated image will be placed) into the container
- -v /dev:/dev --privileged is required to generate disk images (loop devices are used), but not required for ISOs and installer container images
The <image-kind>
argument to the imager
defines the base profile to be used for the image generation.
There are several built-in profiles:
- iso builds a Talos ISO image (see ISO)
- secureboot-iso builds a Talos ISO image with SecureBoot (see SecureBoot)
- metal builds a generic disk image for bare-metal machines
- secureboot-metal builds a generic disk image for bare-metal machines with SecureBoot
- secureboot-installer builds an installer container image with SecureBoot (see SecureBoot)
- aws, gcp, azure, etc. build a disk image for a specific Talos platform
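For example, a minimal invocation of the metal profile, following the docker run pattern shown above, might look like:
docker run --rm -t -v $PWD/_out:/out -v /dev:/dev --privileged ghcr.io/siderolabs/imager:v1.10.0-alpha.0 metal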
The base profile can be customized with additional flags to the imager:
- --arch specifies the architecture of the image to be generated (default: host architecture)
- --meta allows setting initial META values
- --extra-kernel-arg allows customizing the kernel command line arguments. A default kernel arg can be removed by prefixing the argument with a -. For example, -console removes all console=<value> arguments, whereas -console=tty0 removes only the console=tty0 default argument.
- --system-extension-image allows installing a system extension into the image
- --image-cache allows using a local image cache
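For example, several of these flags can be combined to build an arm64 bare-metal image with the default console arguments replaced (a sketch; the serial console device is a placeholder):
docker run --rm -t -v $PWD/_out:/out -v /dev:/dev --privileged ghcr.io/siderolabs/imager:v1.10.0-alpha.0 metal --arch arm64 --extra-kernel-arg=-console --extra-kernel-arg=console=ttyS0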
Extension Image Reference
While Image Factory automatically resolves the extension name into a matching container image for a specific version of Talos, imager
requires the full explicit container image reference.
The imager
also allows installing custom extensions which are not part of the official Talos Linux system extensions.
To get the official Talos Linux system extension container image reference matching a Talos release, use the following command:
crane export ghcr.io/siderolabs/extensions:v1.10.0-alpha.0 | tar x -O image-digests | grep EXTENSION-NAME
Note: this command uses the crane tool, but any other tool which can export the image contents can be used.
For each Talos release, the ghcr.io/siderolabs/extensions:VERSION
image contains a pinned reference to each system extension container image.
Overlay Image Reference
While Image Factory automatically resolves the overlay name into a matching container image for a specific version of Talos, imager
requires the full explicit container image reference.
The imager
also allows installing custom overlays which are not part of the official Talos overlays.
To get the official Talos overlays container image reference matching a Talos release, use the following command:
crane export ghcr.io/siderolabs/overlays:v1.10.0-alpha.0 | tar x -O overlays.yaml
Note: this command uses the crane tool, but any other tool which can export the image contents can be used.
For each Talos release, the ghcr.io/siderolabs/overlays:VERSION
image contains a pinned reference to each overlay container image.
Pulling from Private Registries
Talos Linux official images are all public, but when pulling a custom image from a private registry, the imager
might need authentication to access the images.
The imager container, when pulling images, supports the following methods to authenticate to an external registry:
- for the ghcr.io registry, GITHUB_TOKEN can be provided as an environment variable;
- for other registries, ~/.docker/config.json can be mounted into the container from the host;
  - another option is to use a DOCKER_CONFIG environment variable, and the path will be $DOCKER_CONFIG/config.json in the container;
  - the third option is to mount Podman’s auth file at $XDG_RUNTIME_DIR/containers/auth.json.
Example: Bare-metal with Imager
Let’s assume we want to boot Talos on a bare-metal machine with an Intel CPU and add the gvisor container runtime to the image.
Also, we want to disable predictable network interface names with the net.ifnames=0 kernel argument, replace the Talos default console arguments, and add a custom console arg.
First, let’s look up the extension images for Intel CPU microcode updates and the gvisor container runtime in the extensions repository:
$ crane export ghcr.io/siderolabs/extensions:v1.10.0-alpha.0 | tar x -O image-digests | grep -E 'gvisor|intel-ucode'
ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e
ghcr.io/siderolabs/intel-ucode:20231114@sha256:ea564094402b12a51045173c7523f276180d16af9c38755a894cf355d72c249d
Now we can generate the ISO image with the following command:
$ docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 iso --system-extension-image ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e --system-extension-image ghcr.io/siderolabs/intel-ucode:20231114@sha256:ea564094402b12a51045173c7523f276180d16af9c38755a894cf355d72c249d --extra-kernel-arg net.ifnames=0 --extra-kernel-arg=-console --extra-kernel-arg=console=ttyS1
profile ready:
arch: amd64
platform: metal
secureboot: false
version: v1.10.0-alpha.0
customization:
extraKernelArgs:
- net.ifnames=0
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.10.0-alpha.0
systemExtensions:
- imageRef: ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114@sha256:ea564094402b12a51045173c7523f276180d16af9c38755a894cf355d72c249d
output:
kind: iso
outFormat: raw
initramfs ready
kernel command line: talos.platform=metal console=ttyS1 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 net.ifnames=0
ISO ready
output asset path: /out/metal-amd64.iso
Now the _out/metal-amd64.iso
contains the customized Talos ISO image.
If the machine is going to be booted using PXE, we can instead generate kernel and initramfs images:
docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 iso --output-kind kernel
docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 iso --output-kind initramfs --system-extension-image ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e --system-extension-image ghcr.io/siderolabs/intel-ucode:20231114@sha256:ea564094402b12a51045173c7523f276180d16af9c38755a894cf355d72c249d
Now the _out/kernel-amd64
and _out/initramfs-amd64
contain the customized Talos kernel and initramfs images.
Note: the extra kernel args are not used now, as they are set via the PXE boot process, and can’t be embedded into the kernel or initramfs.
As the next step, we should generate a custom installer
image which contains all required system extensions (kernel args can’t be specified with the installer image, but they are set in the machine configuration):
$ docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 installer --system-extension-image ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e --system-extension-image ghcr.io/siderolabs/intel-ucode:20231114@sha256:ea564094402b12a51045173c7523f276180d16af9c38755a894cf355d72c249d
...
output asset path: /out/metal-amd64-installer.tar
The installer
container image should be pushed to the container registry:
crane push _out/metal-amd64-installer.tar ghcr.io/<username>/installer:v1.10.0-alpha.0
Now we can use the customized installer
image to install Talos on the bare-metal machine.
When it’s time to upgrade a machine, a new installer image can be generated using the new version of imager, with the system extension images updated to the matching versions.
The custom installer
image can now be used to upgrade the Talos machine.
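For example, using the image pushed above (adjust the registry path and version to your own):
talosctl -n <node IP or DNS name> upgrade --image ghcr.io/<username>/installer:v1.10.0-alpha.0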
Example: Raspberry Pi overlay with Imager
Let’s assume we want to boot Talos on Raspberry Pi with rpi_generic
overlay and iscsi-tools
system extension.
First, let’s look up extension images for iscsi-tools
in the extensions repository:
$ crane export ghcr.io/siderolabs/extensions:v1.10.0-alpha.0 | tar x -O image-digests | grep -E 'iscsi-tools'
ghcr.io/siderolabs/iscsi-tools:v0.1.4@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e
Next, we’ll look up the overlay image for rpi_generic
in the overlays repository:
$ crane export ghcr.io/siderolabs/overlays:v1.10.0-alpha.0 | tar x -O overlays.yaml | yq '.overlays[] | select(.name=="rpi_generic")'
name: rpi_generic
image: ghcr.io/siderolabs/sbc-raspberrypi:v0.1.0
digest: sha256:849ace01b9af514d817b05a9c5963a35202e09a4807d12f8a3ea83657c76c863
Now we can generate the metal image with the following command:
$ docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 rpi_generic --arch arm64 --system-extension-image ghcr.io/siderolabs/iscsi-tools:v0.1.4@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e --overlay-image ghcr.io/siderolabs/sbc-raspberrypi:v0.1.0@sha256:849ace01b9af514d817b05a9c5963a35202e09a4807d12f8a3ea83657c76c863 --overlay-name=rpi_generic
profile ready:
arch: arm64
platform: metal
secureboot: false
version: v1.10.0-alpha.0
input:
kernel:
path: /usr/install/arm64/vmlinuz
initramfs:
path: /usr/install/arm64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.10.0-alpha.0
systemExtensions:
- imageRef: ghcr.io/siderolabs/iscsi-tools:v0.1.4@sha256:a68c268d40694b7b93c8ac65d6b99892a6152a2ee23fdbffceb59094cc3047fc
overlay:
name: rpi_generic
image:
imageRef: ghcr.io/siderolabs/sbc-raspberrypi:v0.1.0-alpha.1@sha256:849ace01b9af514d817b05a9c5963a35202e09a4807d12f8a3ea83657c76c863
output:
kind: image
imageOptions:
diskSize: 1306525696
diskFormat: raw
outFormat: .xz
initramfs ready
kernel command line: talos.platform=metal console=tty0 console=ttyAMA0,115200 sysctl.kernel.kexec_load_disabled=1 talos.dashboard.disabled=1 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512
disk image ready
output asset path: /out/metal-arm64.raw
compression done: /out/metal-arm64.raw.xz
Now the _out/metal-arm64.raw.xz
is the compressed disk image which can be written to a boot media.
As the next step, we should generate a custom installer
image which contains all required system extensions (kernel args can’t be specified with the installer image, but they are set in the machine configuration):
$ docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.10.0-alpha.0 installer --arch arm64 --system-extension-image ghcr.io/siderolabs/iscsi-tools:v0.1.4@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e --overlay-image ghcr.io/siderolabs/sbc-raspberrypi:v0.1.0@sha256:849ace01b9af514d817b05a9c5963a35202e09a4807d12f8a3ea83657c76c863 --overlay-name=rpi_generic
...
output asset path: /out/metal-arm64-installer.tar
The installer
container image should be pushed to the container registry:
crane push _out/metal-arm64-installer.tar ghcr.io/<username>/installer:v1.10.0-alpha.0
Now we can use the customized installer
image to install Talos on the Raspberry Pi.
When it’s time to upgrade a machine, a new installer image can be generated using the new version of imager, with the system extension and overlay images updated to the matching versions.
The custom installer
image can now be used to upgrade the Talos machine.
Example: AWS with Imager
Talos is installed on AWS from a disk image (AWS AMI), so only a single boot asset is required.
Let’s assume we want to boot Talos on AWS with gvisor
container runtime system extension.
First, let’s look up extension images for the gvisor
container runtime in the extensions repository:
$ crane export ghcr.io/siderolabs/extensions:v1.10.0-alpha.0 | tar x -O image-digests | grep gvisor
ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e
Next, let’s generate AWS disk image with that system extension:
$ docker run --rm -t -v $PWD/_out:/out -v /dev:/dev --privileged ghcr.io/siderolabs/imager:v1.10.0-alpha.0 aws --system-extension-image ghcr.io/siderolabs/gvisor:20231214.0-v1.10.0-alpha.0@sha256:548b2b121611424f6b1b6cfb72a1669421ffaf2f1560911c324a546c7cee655e
...
output asset path: /out/aws-amd64.raw
compression done: /out/aws-amd64.raw.xz
Now the _out/aws-amd64.raw.xz
contains the customized Talos AWS disk image, which can be uploaded as an AMI to AWS.
If the AWS machine is later going to be upgraded to a new version of Talos (or a new set of system extensions), generate a customized installer
image following the steps above, and upgrade Talos to that installer
image.
Example: Assets with system extensions from image tarballs with Imager
Some advanced features of imager
are currently not exposed via command line arguments like --system-extension-image
.
To access them nonetheless it is possible to supply imager
with a profile.yaml
instead.
Let’s use these advanced features to build a bare-metal installer using a system extension from a private registry.
First use crane
on a host with access to the private registry to export the extension image into a tarball.
crane export <your-private-registry>/<your-extension>:latest <your-extension>
We can then reference the tarball in a suitable profile.yaml
for our intended architecture and output.
In this case we want to build an amd64
, bare-metal installer.
# profile.yaml
arch: amd64
platform: metal
secureboot: false
version: v1.10.0-alpha.0
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.10.0-alpha.0
systemExtensions:
- tarballPath: <your-extension> # notice we use 'tarballPath' instead of 'imageRef'
output:
kind: installer
outFormat: raw
To build the asset, we pass profile.yaml to imager via stdin:
$ cat profile.yaml | docker run --rm -i \
-v $PWD/_out:/out -v $PWD/<your-extension>:/<your-extension> \
ghcr.io/siderolabs/imager:v1.10.0-alpha.0 -
7 - Omni SaaS
Omni allows you to start with bare metal, virtual machines or a cloud provider, and create clusters spanning all of your locations, with a few clicks.
You provide the machines – edge compute, bare metal, VMs, or in your cloud account. Boot from an Omni Talos Linux image. Click to allocate to a cluster. That’s it!
- Vanilla Kubernetes, on your machines, under your control.
- Elegant UI for management and operations
- Security taken care of – ties into your Enterprise ID provider
- Highly Available Kubernetes API end point built in
- Firewall friendly: manage Edge nodes securely
- From single-node clusters to the largest scale
- Support for GPUs and most CSIs.
The Omni SaaS is available to run locally, to support air-gapped security and data sovereignty concerns.
Omni handles the lifecycle of Talos Linux machines, provides unified access to the Talos and Kubernetes API tied to the identity provider of your choice, and provides a UI for cluster management and operations. Omni automates scaling the clusters up and down, and provides a unified view of the state of your clusters.
See more in the Omni documentation.
8 - talosctl
Recommended
The client can be installed and updated via the Homebrew package manager for macOS and Linux.
You will need to install brew
and then you can install talosctl
from the Sidero Labs tap.
brew install siderolabs/tap/talosctl
This will also keep your version of talosctl
up to date with new releases.
This homebrew tap also has formulae for omnictl
if you need to install that package.
Note: Your
talosctl
version should match the version of Talos Linux you are running on a host. To install a specific version oftalosctl
withbrew
you can follow this github issue.
Alternative install
You can automatically install the correct version of talosctl
for your operating system and architecture with an installer script.
This script won’t keep your version updated with releases and you will need to re-run the script to download a new version.
curl -sL https://talos.dev/install | sh
This script will work on macOS, Linux, and WSL on Windows. It supports the amd64 and arm64 architectures.
Manual and Windows install
All versions can be manually downloaded from the talos releases page, including builds for Linux, macOS, and Windows.
You will need to add the binary to a folder that is part of your executable $PATH to use it without providing the full path to the executable.
Updating the binary will be a manual process.
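For example, on Linux amd64 the binary can be fetched directly from the releases page, following the same pattern used in the guides above (adjust the OS and architecture to match your system):
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v1.10.0-alpha.0/talosctl-linux-amd64
chmod +x /usr/local/bin/talosctl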