Welcome

Welcome to the Talos documentation. If you are just getting familiar with Talos, we recommend starting here:

  • What is Talos: a quick description of Talos
  • Quickstart: the fastest way to get a Talos cluster up and running
  • Getting Started: a long-form, guided tour of getting a full Talos cluster deployed

Open Source

Community

If you’re interested in this project and would like to help in engineering efforts, or have general usage questions, we are happy to have you! We hold a weekly meeting that all audiences are welcome to attend.

We would appreciate your feedback so that we can make Talos even better! To do so, you can take our survey.

Office Hours

You can subscribe to this meeting by joining the community forum above.

Enterprise

If you are using Talos in a production setting, and need consulting services to get started or to integrate Talos into your existing environment, we can help. Sidero Labs, Inc. offers support contracts with SLA (Service Level Agreement)-bound terms for mission-critical environments.

Learn More

1 - Introduction

1.1 - What is Talos?

Talos is a container-optimized Linux distribution: a reimagining of Linux for distributed systems such as Kubernetes. It is designed to be as minimal as possible while still remaining practical. For these reasons, Talos has a number of features unique to it:

  • it is immutable
  • it is atomic
  • it is ephemeral
  • it is minimal
  • it is secure by default
  • it is managed via a single declarative configuration file and gRPC API

Talos can be deployed on container, cloud, virtualized, and bare metal platforms.

Why Talos

In having less, Talos offers more. Security. Efficiency. Resiliency. Consistency.

All of these areas are improved simply by having less.

1.2 - Quickstart

The easiest way to try Talos is by using the CLI (talosctl) to create a cluster on a machine with Docker installed.

Prerequisites

talosctl

Download talosctl:

curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

kubectl

Download kubectl via one of the methods outlined in the official Kubernetes documentation.

Create the Cluster

Now run the following:

talosctl cluster create

Verify that you can reach Kubernetes:

$ kubectl get nodes -o wide
NAME                     STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
talos-default-master-1   Ready    master   115s   v1.20.1   10.5.0.2      <none>        Talos (v0.8.0)   <host kernel>    containerd://1.4.3
talos-default-worker-1   Ready    <none>   115s   v1.20.1   10.5.0.3      <none>        Talos (v0.8.0)   <host kernel>    containerd://1.4.3

Destroy the Cluster

When you are all done, remove the cluster:

talosctl cluster destroy

1.3 - Getting Started

Regardless of where you run Talos, you will find that there is a pattern to deploying it.

In general you will need to:

  • identify and create the image
  • optionally create a load balancer for Kubernetes
  • configure Talos
  • create the nodes

Kernel Parameters

The following is a list of kernel parameters required by Talos (a combined example follows the list):

  • talos.config: the HTTP(S) URL at which the machine data can be found
  • talos.platform: can be one of aws, azure, container, digitalocean, gcp, metal, packet, or vmware
  • init_on_alloc=1: required by KSPP
  • init_on_free=1: required by KSPP
  • slab_nomerge: required by KSPP
  • pti=on: required by KSPP
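
Putting these parameters together, a full kernel command line for a metal install might look like the following; the configuration URL is a placeholder for your own machine configuration location:

talos.platform=metal talos.config=https://example.com/machine-config.yaml init_on_alloc=1 init_on_free=1 slab_nomerge pti=on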

CLI

Installation

curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

Configuration

The talosctl command needs some configuration options to connect to the right node. By default talosctl looks for a file called config located at $HOME/.talos.

You can also override which configuration talosctl uses by specifying the --talosconfig parameter:

talosctl --talosconfig talosconfig

Configuring the endpoints:

talosctl config endpoint <endpoint>...

Endpoints are the communication endpoints to which the client directly talks. These can be load balancers, DNS hostnames, a list of IPs, etc. In general, it is recommended that these point to the set of control plane nodes, either directly or through a reverse proxy or load balancer.

Each endpoint will automatically proxy requests destined to another node through it, so it is not necessary to change the endpoint configuration just because you wish to talk to a different node within the cluster.

Endpoints do, however, need to be members of the same Talos cluster as the target node, because these proxied connections rely on certificate-based authentication.
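
For example, to point the client directly at three control plane nodes (the addresses below are placeholders):

talosctl config endpoint 192.168.2.11 192.168.2.12 192.168.2.13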

Configuring the nodes:

talosctl config nodes <node>...

The node is the target node on which you wish to perform the API call. While you can configure the target node (or even a set of target nodes) inside the talosctl configuration file, it is often useful to simply and explicitly declare the target node(s) using the -n or --nodes command-line parameter.
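
For example, to run a single command against one node (the address is a placeholder):

talosctl -n 10.0.0.21 version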

Keep in mind that, when specifying nodes, their IPs and/or hostnames are as seen by the endpoint servers, not by the client. This is because all connections are proxied through the endpoints.

To verify what node(s) you’re currently talking to, you can run:

$ talosctl version
Client:
        ...
Server:
        NODE:        <node>
        ...

1.4 - System Requirements

Minimum Requirements

Role                 Memory   Cores
Init/Control Plane   2GB      2
Worker               1GB      1

Recommended

Role                 Memory   Cores
Init/Control Plane   4GB      4
Worker               2GB      2

These requirements are similar to those of Kubernetes itself.

2 - Bare Metal Platforms

2.1 - Digital Rebar

In this guide we will create a Kubernetes cluster with 1 worker node and 2 control plane nodes using an existing Digital Rebar deployment.

Prerequisites

Creating a Cluster

In this guide we will create a Kubernetes cluster with 1 worker node and 2 control plane nodes. We assume an existing Digital Rebar deployment, and some familiarity with iPXE.

We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

The load balancer is used to distribute the load across multiple control plane nodes. This isn’t covered in detail, because we assume some load balancing knowledge beforehand. If you think this should be added to the docs, please create an issue.

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode metal
init.yaml is valid for metal mode
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config join.yaml --mode metal
join.yaml is valid for metal mode

Publishing the Machine Configuration Files

Digital Rebar has a built-in file server, which means we can use this feature to expose the Talos configuration files. We will place init.yaml, controlplane.yaml, and worker.yaml (the generated join.yaml, renamed to match the worker role used below) into the Digital Rebar file server by using the drpcli tools.

Copy the generated files from the step above into your Digital Rebar installation.

drpcli file upload <file>.yaml as <file>.yaml

Replacing <file> with init, controlplane or worker.
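
For example, all three can be uploaded in one pass with a small shell loop, assuming the files are named init.yaml, controlplane.yaml, and worker.yaml in the current directory:

# Upload each machine configuration to the Digital Rebar file server
for f in init controlplane worker; do
  drpcli file upload ${f}.yaml as ${f}.yaml
done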

Download the boot files

Download a recent version of boot.tar.gz from GitHub.

Upload to DRP:

$ drpcli isos upload boot.tar.gz as talos.tar.gz
{
  "Path": "talos.tar.gz",
  "Size": 96470072
}

We have some Digital Rebar example files in the Git repo you can use to provision Digital Rebar with drpcli.

To apply these configs you need to create them, and then apply them as follows:

$ drpcli bootenvs create talos
{
  "Available": true,
  "BootParams": "",
  "Bundle": "",
  "Description": "",
  "Documentation": "",
  "Endpoint": "",
  "Errors": [],
  "Initrds": [],
  "Kernel": "",
  "Meta": {},
  "Name": "talos",
  "OS": {
    "Codename": "",
    "Family": "",
    "IsoFile": "",
    "IsoSha256": "",
    "IsoUrl": "",
    "Name": "",
    "SupportedArchitectures": {},
    "Version": ""
  },
  "OnlyUnknown": false,
  "OptionalParams": [],
  "ReadOnly": false,
  "RequiredParams": [],
  "Templates": [],
  "Validated": true
}
drpcli bootenvs update talos - < bootenv.yaml

You need to do this for all files in the example directory. If you don’t have access to the drpcli tools you can also use the web interface.

It’s important that the SHA256 hash in the bootenv matches that of the uploaded boot.tar.gz.
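
For example, the hash of the downloaded archive can be computed with:

sha256sum boot.tar.gz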

Bootenv BootParams

We’re using some of Digital Rebar’s built-in templating to make sure the machine gets the correct role assigned.

talos.platform=metal talos.config={{ .ProvisionerURL }}/files/{{.Param \"talos/role\"}}.yaml

This is why we also include a params.yaml in the example directory to make sure the role is set to one of the following:

  • controlplane
  • init
  • worker

The {{.Param \"talos/role\"}} then gets populated with one of the above roles.

Boot the Machines

In the UI of Digital Rebar you need to select the machines you want to provision. Once selected, you need to assign the following:

  • Profile
  • Workflow

This will provision the Stage and Bootenv with the talos values. Once this is done, you can boot the machine.

To understand the boot process, see the higher-level metal overview documentation.

Retrieve the kubeconfig

Once everything is running we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

2.2 - Equinix Metal

Creating Talos cluster using Equinix Metal.

Prerequisites

This guide assumes the user has a working API token, the Equinix Metal CLI installed, and some familiarity with the CLI.

Network Booting

To install Talos to a server, a working TFTP and iPXE server are needed. How this is done varies and is left as an exercise for the user. In general this requires a Talos kernel (vmlinuz) and initramfs. These assets can be downloaded from a given release.
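
For example, the PXE assets can be fetched with curl; the asset names below (vmlinuz and initramfs.xz, as referenced in the Matchbox guide) and the <version> placeholder are assumptions you should adjust to match your chosen release:

curl -LO https://github.com/talos-systems/talos/releases/download/<version>/vmlinuz
curl -LO https://github.com/talos-systems/talos/releases/download/<version>/initramfs.xz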

Special Considerations

PXE Boot Kernel Parameters

The following is a list of kernel parameters required by Talos:

  • talos.platform: set this to packet
  • init_on_alloc=1: required by KSPP
  • init_on_free=1: required by KSPP
  • slab_nomerge: required by KSPP
  • pti=on: required by KSPP

User Data

To configure Talos you can use the metadata service provided by Equinix Metal. It is required to add a shebang to the top of the configuration file. The shebang is arbitrary in the case of Talos, and the convention we use is #!talos.

Creating a Cluster via the Equinix Metal CLI

Control Plane Endpoint

The strategy used for an HA cluster varies and is left as an exercise for the user. Some of the known ways are:

  • DNS
  • Load Balancer
  • BGP

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

Now add the required shebang (e.g. #!talos) at the top of init.yaml, controlplane.yaml, and join.yaml. At this point, you can modify the generated configs to your liking.
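
As a minimal sketch, the shebang can be prepended to each generated file from the shell:

# Prepend the #!talos shebang to each generated configuration file
for f in init controlplane join; do
  ( echo '#!talos'; cat ${f}.yaml ) > ${f}.yaml.tmp && mv ${f}.yaml.tmp ${f}.yaml
done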

Validate the Configuration Files

talosctl validate --config init.yaml --mode metal
talosctl validate --config controlplane.yaml --mode metal
talosctl validate --config join.yaml --mode metal

Note: Validation of the install disk could potentially fail as the validation is performed on your local machine and the specified disk may not exist.

Create the Bootstrap Node

packet device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --ipxe-script-url $PXE_SERVER \
  --operating-system "custom_ipxe" \
  --plan $PLAN\
  --hostname $HOSTNAME\
  --userdata-file init.yaml

Create the Remaining Control Plane Nodes

packet device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --ipxe-script-url $PXE_SERVER \
  --operating-system "custom_ipxe" \
  --plan $PLAN\
  --hostname $HOSTNAME\
  --userdata-file controlplane.yaml

Note: The above should be invoked at least twice in order for etcd to form quorum.

Create the Worker Nodes

packet device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --ipxe-script-url $PXE_SERVER \
  --operating-system "custom_ipxe" \
  --plan $PLAN\
  --hostname $HOSTNAME\
  --userdata-file join.yaml

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

2.3 - Matchbox

In this guide we will create an HA Kubernetes cluster with 3 worker nodes using an existing load balancer and matchbox deployment.

Creating a Cluster

In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing load balancer, matchbox deployment, and some familiarity with iPXE.

We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode metal
init.yaml is valid for metal mode
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config join.yaml --mode metal
join.yaml is valid for metal mode

Publishing the Machine Configuration Files

In bare-metal setups it is up to the user to provide the configuration files over HTTP(S). A special kernel parameter (talos.config) must be used to inform Talos about where it should retrieve its configuration file. To keep things simple we will place init.yaml, controlplane.yaml, and join.yaml into Matchbox’s assets directory. This directory is automatically served by Matchbox.
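
For example, assuming the default assets directory of /var/lib/matchbox/assets (as used below):

cp init.yaml controlplane.yaml join.yaml /var/lib/matchbox/assets/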

Create the Matchbox Configuration Files

The profiles we will create will reference vmlinuz and initramfs.xz. Download these files from the release of your choice, and place them in /var/lib/matchbox/assets.
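
For example, with curl (the <version> placeholder and asset names are assumptions; adjust them to match the release you chose):

curl -L https://github.com/talos-systems/talos/releases/download/<version>/vmlinuz -o /var/lib/matchbox/assets/vmlinuz
curl -L https://github.com/talos-systems/talos/releases/download/<version>/initramfs.xz -o /var/lib/matchbox/assets/initramfs.xz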

Profiles

The Bootstrap Node
{
  "id": "init",
  "name": "init",
  "boot": {
    "kernel": "/assets/vmlinuz",
    "initrd": ["/assets/initramfs.xz"],
    "args": [
      "initrd=initramfs.xz",
      "init_on_alloc=1",
      "init_on_free=1",
      "slab_nomerge",
      "pti=on",
      "console=tty0",
      "console=ttyS0",
      "printk.devkmsg=on",
      "talos.platform=metal",
      "talos.config=http://matchbox.talos.dev/assets/init.yaml"
    ]
  }
}

Note: Be sure to change http://matchbox.talos.dev to the endpoint of your matchbox server.

Additional Control Plane Nodes
{
  "id": "control-plane",
  "name": "control-plane",
  "boot": {
    "kernel": "/assets/vmlinuz",
    "initrd": ["/assets/initramfs.xz"],
    "args": [
      "initrd=initramfs.xz",
      "init_on_alloc=1",
      "init_on_free=1",
      "slab_nomerge",
      "pti=on",
      "console=tty0",
      "console=ttyS0",
      "printk.devkmsg=on",
      "talos.platform=metal",
      "talos.config=http://matchbox.talos.dev/assets/controlplane.yaml"
    ]
  }
}
Worker Nodes
{
  "id": "default",
  "name": "default",
  "boot": {
    "kernel": "/assets/vmlinuz",
    "initrd": ["/assets/initramfs.xz"],
    "args": [
      "initrd=initramfs.xz",
      "init_on_alloc=1",
      "init_on_free=1",
      "slab_nomerge",
      "pti=on",
      "console=tty0",
      "console=ttyS0",
      "printk.devkmsg=on",
      "talos.platform=metal",
      "talos.config=http://matchbox.talos.dev/assets/join.yaml"
    ]
  }
}

Groups

Now, create the following groups, and ensure that the selectors are accurate for your specific setup.

{
  "id": "control-plane-1",
  "name": "control-plane-1",
  "profile": "init",
  "selector": {
    ...
  }
}
{
  "id": "control-plane-2",
  "name": "control-plane-2",
  "profile": "control-plane",
  "selector": {
    ...
  }
}
{
  "id": "control-plane-3",
  "name": "control-plane-3",
  "profile": "control-plane",
  "selector": {
    ...
  }
}
{
  "id": "default",
  "name": "default",
  "profile": "default"
}

Boot the Machines

Now that we have our configuration files in place, boot all the machines. Talos will come up on each machine, grab its configuration file, and bootstrap itself.

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

2.4 - Sidero

Sidero is a project created by the Talos team that has native support for Talos.

Sidero is a project created by the Talos team that has native support for Talos. The best way to get started with Sidero is to visit the website.

3 - Virtualized Platforms

3.1 - Hyper-V

Talos is known to work on Hyper-V; however, it is currently undocumented.

3.2 - KVM

Talos is known to work on KVM; however, it is currently undocumented.

3.3 - Proxmox

Creating Talos Kubernetes cluster using Proxmox.

In this guide we will create a Kubernetes cluster using Proxmox.

Video Walkthrough

To see a live demo of this writeup, watch the video walkthrough on YouTube.

Installation

How to Get Proxmox

It is assumed that you have already installed Proxmox onto the server you wish to create Talos VMs on. Visit the Proxmox downloads page if necessary.

Install talosctl

You can download talosctl via github.com/talos-systems/talos/releases

curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl

For example version v0.8.0 for linux platform:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl

Download ISO Image

In order to install Talos in Proxmox, you will need the ISO image from the Talos release page. You can download talos-amd64.iso via github.com/talos-systems/talos/releases

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso

For example version v0.8.0 for linux platform:

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talos-amd64.iso -L -o _out/talos-amd64.iso

Upload ISO

From the Proxmox UI, select the “local” storage and enter the “Content” section. Click the “Upload” button:

Select the ISO you downloaded previously, then hit “Upload”

Create VMs

Start by creating a new VM by clicking the “Create VM” button in the Proxmox UI:

Fill out a name for the new VM:

In the OS tab, select the ISO we uploaded earlier:

Keep the defaults set in the “System” tab.

Keep the defaults in the “Hard Disk” tab as well, only changing the size if desired.

In the “CPU” section, give at least 2 cores to the VM:

Verify that the RAM is set to at least 2GB:

Keep the default values for networking, verifying that the VM is set to come up on the bridge interface:

Finish creating the VM by clicking through the “Confirm” tab and then “Finish”.

Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.

Start Control Plane Node

Once the VMs have been created and updated, start the VM that will be the first control plane node. This VM will boot the ISO image specified earlier and enter “maintenance mode”. Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received. Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP for the rest of this guide. If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4.

Generate Machine Configurations

With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:

talosctl gen config talos-vbox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out

This will create several files in the _out directory: init.yaml, controlplane.yaml, join.yaml, and talosconfig.

Create Control Plane Node

Using the init.yaml generated above, you can now apply this config using talosctl. Issue:

talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/init.yaml

You should now see some action in the Proxmox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.

Note: This process can be repeated multiple times to create an HA control plane. Simply apply controlplane.yaml instead of init.yaml for subsequent nodes.

Create Worker Node

Create at least a single worker node using a process similar to the control plane creation above. Start the worker node VM and wait for it to enter “maintenance mode”. Take note of the worker node’s IP address, which will be referred to as $WORKER_IP

Issue:

talosctl apply-config --insecure --nodes $WORKER_IP --file _out/join.yaml

Note: This process can be repeated multiple times to add additional workers.

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace. To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.

First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:

export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP

Retrieve and Configure the kubeconfig

Fetch the kubeconfig file from the control plane node by issuing:

talosctl kubeconfig

You can then use kubectl in this fashion:

kubectl get nodes

Cleaning Up

To cleanup, simply stop and delete the virtual machines from the Proxmox UI.

3.4 - VMware

Creating Talos Kubernetes cluster using VMware.

Creating a Cluster via the govc CLI

In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We will use the govc cli which can be downloaded here.

Prerequisites

Prior to starting, it is important to have the following infrastructure in place and available:

  • DHCP server
  • Load Balancer or DNS address for cluster endpoint
    • If using a load balancer, the most common setup is to balance tcp/443 across tcp/6443 on the control plane nodes
    • If using a DNS address, the A record should return back the addresses of the control plane nodes

Create the Machine Configuration Files

Generating Base Configurations

Using the IP or DNS name of the load balancer set up in the prerequisite steps, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-vmware-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig
$ talosctl gen config talos-k8s-vmware-tutorial https://<DNS name>:6443
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode

Set Environment Variables

govc makes use of the following environment variables:

export GOVC_URL=<vCenter url>
export GOVC_USERNAME=<vCenter username>
export GOVC_PASSWORD=<vCenter password>

Note: If your vCenter installation makes use of self signed certificates, you’ll want to export GOVC_INSECURE=true.

There are some additional variables that you may need to set:

export GOVC_DATACENTER=<vCenter datacenter>
export GOVC_RESOURCE_POOL=<vCenter resource pool>
export GOVC_DATASTORE=<vCenter datastore>
export GOVC_NETWORK=<vCenter network>

Download the OVA

A talos.ova asset is published with each release. We will refer to the version of the release as $TALOS_VERSION below. It can be easily exported with export TALOS_VERSION="v0.3.0-alpha.10" or similar.

curl -LO https://github.com/siderolabs/talos/releases/download/$TALOS_VERSION/talos.ova

Import the OVA into vCenter

We’ll need to repeat this step for each Talos node we want to create. In a typical HA setup, we’ll have 3 control plane nodes and N workers. In the following example, we’ll set up an HA control plane with two worker nodes.

govc import.ova -name talos-$TALOS_VERSION /path/to/downloaded/talos.ova

Create the Bootstrap Node

We’ll clone the OVA to create the bootstrap node (our first control plane node).

govc vm.clone -on=false -vm talos-$TALOS_VERSION control-plane-1

Talos makes use of the guestinfo facility of VMware to provide the machine/cluster configuration. This can be set using the govc vm.change command. To facilitate persistent storage using the vSphere cloud provider integration with Kubernetes, disk.enableUUID=1 is used.

govc vm.change \
  -e "guestinfo.talos.config=$(cat init.yaml | base64)" \
  -e "disk.enableUUID=1" \
  -vm /ha-datacenter/vm/control-plane-1

Update Hardware Resources for the Bootstrap Node

  • -c is used to configure the number of cpus
  • -m is used to configure the amount of memory (in MB)

govc vm.change \
  -c 2 \
  -m 4096 \
  -vm /ha-datacenter/vm/control-plane-1

The following can be used to adjust the ephemeral disk size.

govc vm.disk.change -vm control-plane-1 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-1

Create the Remaining Control Plane Nodes

govc vm.clone -on=false -vm talos-$TALOS_VERSION control-plane-2
govc vm.change \
  -e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
  -e "disk.enableUUID=1" \
  -vm /ha-datacenter/vm/control-plane-2
govc vm.clone -on=false -vm talos-$TALOS_VERSION control-plane-3
govc vm.change \
  -e "guestinfo.talos.config=$(base64 controlplane.yaml)" \
  -e "disk.enableUUID=1" \
  -vm /ha-datacenter/vm/control-plane-3
govc vm.change \
  -c 2 \
  -m 4096 \
  -vm /ha-datacenter/vm/control-plane-2
govc vm.change \
  -c 2 \
  -m 4096 \
  -vm /ha-datacenter/vm/control-plane-3
govc vm.disk.change -vm control-plane-2 -disk.name disk-1000-0 -size 10G
govc vm.disk.change -vm control-plane-3 -disk.name disk-1000-0 -size 10G
govc vm.power -on control-plane-2
govc vm.power -on control-plane-3

Update Settings for the Worker Nodes

govc vm.clone -on=false -vm talos-$TALOS_VERSION worker-1
govc vm.change \
  -e "guestinfo.talos.config=$(base64 join.yaml)" \
  -e "disk.enableUUID=1" \
  -vm /ha-datacenter/vm/worker-1
govc vm.clone -on=false -vm talos-$TALOS_VERSION worker-2
govc vm.change \
  -e "guestinfo.talos.config=$(base64 join.yaml)" \
  -e "disk.enableUUID=1" \
  -vm /ha-datacenter/vm/worker-2
govc vm.change \
  -c 4 \
  -m 8192 \
  -vm /ha-datacenter/vm/worker-1
govc vm.change \
  -c 4 \
  -m 8192 \
  -vm /ha-datacenter/vm/worker-2
govc vm.disk.change -vm worker-1 -disk.name disk-1000-0 -size 50G
govc vm.disk.change -vm worker-2 -disk.name disk-1000-0 -size 50G
govc vm.power -on worker-1
govc vm.power -on worker-2

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

3.5 - Xen

Talos is known to work on Xen; however, it is currently undocumented.

4 - Cloud Platforms

4.1 - AWS

Creating a cluster via the AWS CLI.

Creating a Cluster via the AWS CLI

In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing VPC, and some familiarity with AWS. If you need more information on AWS specifics, please see the official AWS documentation.

Create the Subnet

aws ec2 create-subnet \
    --region $REGION \
    --vpc-id $VPC \
    --cidr-block ${CIDR_BLOCK}

Create the AMI

Prepare the Import Prerequisites

Create the S3 Bucket
aws s3api create-bucket \
    --bucket $BUCKET \
    --create-bucket-configuration LocationConstraint=$REGION \
    --acl private
Create the vmimport Role

In order to create an AMI, ensure that the vmimport role exists as described in the official AWS documentation.

Note that the role should be associated with the S3 bucket we created above.

Create the Image Snapshot

First, download the AWS image from a Talos release:

curl -LO https://github.com/talos-systems/talos/releases/latest/download/aws-amd64.tar.gz
tar -xvf aws-amd64.tar.gz

Copy the RAW disk to S3 and import it as a snapshot:

aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
aws ec2 import-snapshot \
    --region $REGION \
    --description "Talos kubernetes tutorial" \
    --disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}"

Save the SnapshotId, as we will need it once the import is done. To check on the status of the import, run:

aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids <import task ID>

Once the SnapshotTaskDetail.Status indicates completed, we can register the image.
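
As a sketch, once the import has completed you can capture the snapshot ID into the $SNAPSHOT variable used in the next step (substitute the import task ID returned earlier):

SNAPSHOT=$(aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids <import task ID> \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' \
    --output text)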

Register the Image
aws ec2 register-image \
    --region $REGION \
    --block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT,VolumeSize=4,VolumeType=gp2}" \
    --root-device-name /dev/xvda \
    --virtualization-type hvm \
    --architecture x86_64 \
    --ena-support \
    --name talos-aws-tutorial-ami

We now have an AMI we can use to create our cluster. Save the AMI ID, as we will need it when we create EC2 instances.

Create a Security Group

aws ec2 create-security-group \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --description "Security Group for EC2 instances to allow ports required by Talos"

Using the security group ID from above, allow all internal traffic within the same security group:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol all \
    --port 0 \
    --source-group $SECURITY_GROUP

and expose the Talos and Kubernetes APIs:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol tcp \
    --port 6443 \
    --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol tcp \
    --port 50000-50001 \
    --cidr 0.0.0.0/0

Create a Load Balancer

aws elbv2 create-load-balancer \
    --region $REGION \
    --name talos-aws-tutorial-lb \
    --type network --subnets $SUBNET

Take note of the DNS name and ARN. We will need these soon.
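
If you prefer to capture these programmatically, here is a sketch using the load balancer name from the previous command; the variable names are only suggestions, though $LOAD_BALANCER_ARN is used again later in this guide:

LOAD_BALANCER_ARN=$(aws elbv2 describe-load-balancers \
    --region $REGION \
    --names talos-aws-tutorial-lb \
    --query 'LoadBalancers[0].LoadBalancerArn' \
    --output text)

LOAD_BALANCER_DNS=$(aws elbv2 describe-load-balancers \
    --region $REGION \
    --names talos-aws-tutorial-lb \
    --query 'LoadBalancers[0].DNSName' \
    --output text)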

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

Take note that in this version of Talos, the generated configs are too long for the AWS userdata field. Comments can be removed to work around this with a sed command like:

cat init.yaml | sed 's/ #.*//' > temp.yaml; mv temp.yaml init.yaml

cat controlplane.yaml | sed 's/ #.*//' > temp.yaml; mv temp.yaml controlplane.yaml

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode

Create the EC2 Instances

Note: There is a known issue that prevents Talos from running on T2 instance types. Please use T3 if you need burstable instance types.

Create the Bootstrap Node

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://init.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-0}]"

Create the Remaining Control Plane Nodes

CP_COUNT=1
while [[ "$CP_COUNT" -lt 3 ]]; do
  aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://controlplane.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$CP_COUNT}]"
  ((CP_COUNT++))
done

Make a note of the resulting PrivateIpAddress from the init and controlplane nodes for later use.

Create the Worker Nodes

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 3 \
    --instance-type t3.small \
    --user-data file://join.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-worker}]"

Configure the Load Balancer

aws elbv2 create-target-group \
    --region $REGION \
    --name talos-aws-tutorial-tg \
    --protocol TCP \
    --port 6443 \
    --target-type ip \
    --vpc-id $VPC

Now, using the target group’s ARN and the PrivateIpAddress values from the instances that you created, register the targets:

aws elbv2 register-targets \
    --region $REGION \
    --target-group-arn $TARGET_GROUP_ARN \
    --targets Id=$CP_NODE_1_IP  Id=$CP_NODE_2_IP  Id=$CP_NODE_3_IP

Using the ARNs of the load balancer and target group from previous steps, create the listener:

aws elbv2 create-listener \
    --region $REGION \
    --load-balancer-arn $LOAD_BALANCER_ARN \
    --protocol TCP \
    --port 443 \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

4.2 - Azure

Creating a cluster via the CLI on Azure.

Creating a Cluster via the CLI

In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume existing Blob Storage, and some familiarity with Azure. If you need more information on Azure specifics, please see the official Azure documentation.

Environment Setup

We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.

# Storage account to use
export STORAGE_ACCOUNT="StorageAccountName"

# Storage container to upload to
export STORAGE_CONTAINER="StorageContainerName"

# Resource group name
export GROUP="ResourceGroupName"

# Location
export LOCATION="centralus"

# Get storage account connection string based on info above
export CONNECTION=$(az storage account show-connection-string \
                    -n $STORAGE_ACCOUNT \
                    -g $GROUP \
                    -o tsv)

Create the Image

First, download the Azure image from a Talos release. Once downloaded, untar with tar -xvf /path/to/azure-amd64.tar.gz

Upload the VHD

Once you have pulled down the image, you can upload it to blob storage with:

az storage blob upload \
  --connection-string $CONNECTION \
  --container-name $STORAGE_CONTAINER \
  -f /path/to/extracted/talos-azure.vhd \
  -n talos-azure.vhd

Register the Image

Now that the image is present in our blob storage, we’ll register it.

az image create \
  --name talos \
  --source https://$STORAGE_ACCOUNT.blob.core.windows.net/$STORAGE_CONTAINER/talos-azure.vhd \
  --os-type linux \
  -g $GROUP

Network Infrastructure

Virtual Networks and Security Groups

Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a network security group and add rules to it.

# Create vnet
az network vnet create \
  --resource-group $GROUP \
  --location $LOCATION \
  --name talos-vnet \
  --subnet-name talos-subnet

# Create network security group
az network nsg create -g $GROUP -n talos-sg

# Client -> apid
az network nsg rule create \
  -g $GROUP \
  --nsg-name talos-sg \
  -n apid \
  --priority 1001 \
  --destination-port-ranges 50000 \
  --direction inbound

# Trustd
az network nsg rule create \
  -g $GROUP \
  --nsg-name talos-sg \
  -n trustd \
  --priority 1002 \
  --destination-port-ranges 50001 \
  --direction inbound

# etcd
az network nsg rule create \
  -g $GROUP \
  --nsg-name talos-sg \
  -n etcd \
  --priority 1003 \
  --destination-port-ranges 2379-2380 \
  --direction inbound

# Kubernetes API Server
az network nsg rule create \
  -g $GROUP \
  --nsg-name talos-sg \
  -n kube \
  --priority 1004 \
  --destination-port-ranges 6443 \
  --direction inbound

Load Balancer

We will create a public ip, load balancer, and a health check that we will use for our control plane.

# Create public ip
az network public-ip create \
  --resource-group $GROUP \
  --name talos-public-ip \
  --allocation-method static

# Create lb
az network lb create \
  --resource-group $GROUP \
  --name talos-lb \
  --public-ip-address talos-public-ip \
  --frontend-ip-name talos-fe \
  --backend-pool-name talos-be-pool

# Create health check
az network lb probe create \
  --resource-group $GROUP \
  --lb-name talos-lb \
  --name talos-lb-health \
  --protocol tcp \
  --port 6443

# Create lb rule for 6443
az network lb rule create \
  --resource-group $GROUP \
  --lb-name talos-lb \
  --name talos-6443 \
  --protocol tcp \
  --frontend-ip-name talos-fe \
  --frontend-port 6443 \
  --backend-pool-name talos-be-pool \
  --backend-port 6443 \
  --probe-name talos-lb-health

Network Interfaces

In Azure, we have to pre-create the NICs for our control plane so that they can be associated with our load balancer.

for i in $( seq 0 1 2 ); do
  # Create public IP for each nic
  az network public-ip create \
    --resource-group $GROUP \
    --name talos-controlplane-public-ip-$i \
    --allocation-method static


  # Create nic
  az network nic create \
    --resource-group $GROUP \
    --name talos-controlplane-nic-$i \
    --vnet-name talos-vnet \
    --subnet talos-subnet \
    --network-security-group talos-sg \
    --public-ip-address talos-controlplane-public-ip-$i\
    --lb-name talos-lb \
    --lb-address-pools talos-be-pool
done

Cluster Configuration

With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.

LB_PUBLIC_IP=$(az network public-ip show \
              --resource-group $GROUP \
              --name talos-public-ip \
              --query [ipAddress] \
              --output tsv)

talosctl gen config talos-k8s-azure-tutorial https://${LB_PUBLIC_IP}:6443

Compute Creation

We are now ready to create our azure nodes.

# Create availability set
az vm availability-set create \
  --name talos-controlplane-av-set \
  -g $GROUP

# Create controlplane 0
az vm create \
  --name talos-controlplane-0 \
  --image talos \
  --custom-data ./init.yaml \
  -g $GROUP \
  --admin-username talos \
  --generate-ssh-keys \
  --verbose \
  --boot-diagnostics-storage $STORAGE_ACCOUNT \
  --os-disk-size-gb 20 \
  --nics talos-controlplane-nic-0 \
  --availability-set talos-controlplane-av-set \
  --no-wait

# Create 2 more controlplane nodes
for i in $( seq 1 2 ); do
  az vm create \
    --name talos-controlplane-$i \
    --image talos \
    --custom-data ./controlplane.yaml \
    -g $GROUP \
    --admin-username talos \
    --generate-ssh-keys \
    --verbose \
    --boot-diagnostics-storage $STORAGE_ACCOUNT \
    --os-disk-size-gb 20 \
    --nics talos-controlplane-nic-$i \
    --availability-set talos-controlplane-av-set \
    --no-wait
done

# Create worker node
  az vm create \
    --name talos-worker-0 \
    --image talos \
    --vnet-name talos-vnet \
    --subnet talos-subnet \
    --custom-data ./join.yaml \
    -g $GROUP \
    --admin-username talos \
    --generate-ssh-keys \
    --verbose \
    --boot-diagnostics-storage $STORAGE_ACCOUNT \
    --nsg talos-sg \
    --os-disk-size-gb 20 \
    --no-wait

# NOTES:
# `--admin-username` and `--generate-ssh-keys` are required by the az cli,
# but are not actually used by talos
# `--os-disk-size-gb` is the backing disk for Kubernetes and any workload containers
# `--boot-diagnostics-storage` is to enable console output which may be necessary
# for troubleshooting

Retrieve the kubeconfig

You should now be able to interact with your cluster with talosctl. First, we need to discover the public IP of our first control plane node.

CONTROL_PLANE_0_IP=$(az network public-ip show \
                    --resource-group $GROUP \
                    --name talos-controlplane-public-ip-0 \
                    --query [ipAddress] \
                    --output tsv)
talosctl --talosconfig ./talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig config node $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig kubeconfig .
kubectl --kubeconfig ./kubeconfig get nodes

4.3 - DigitalOcean

Creating a cluster via the CLI on DigitalOcean.

Creating a Cluster via the CLI

In this guide we will create an HA Kubernetes cluster with 1 worker node. We assume an existing Space, and some familiarity with DigitalOcean. If you need more information on DigitalOcean specifics, please see the official DigitalOcean documentation.

Create the Image

First, download the DigitalOcean image from a Talos release. Extract the archive to get the disk.raw file, then compress it using gzip to produce disk.raw.gz.
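
For example (the archive name is a placeholder; adjust it to match your download):

tar -xvf /path/to/<downloaded DigitalOcean image>.tar.gz
gzip disk.raw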

Using an upload method of your choice (doctl does not have Spaces support), upload the image to a space. Now, create an image using the URL of the uploaded image:

doctl compute image create \
    --region $REGION \
    --image-description talos-digital-ocean-tutorial \
    --image-url https://talos-tutorial.$REGION.digitaloceanspaces.com/disk.raw.gz \
    Talos

Save the image ID. We will need it when creating droplets.

Create a Load Balancer

doctl compute load-balancer create \
    --region $REGION \
    --name talos-digital-ocean-tutorial-lb \
    --tag-name talos-digital-ocean-tutorial-control-plane \
    --health-check protocol:tcp,port:6443,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:5,unhealthy_threshold:3 \
    --forwarding-rules entry_protocol:tcp,entry_port:443,target_protocol:tcp,target_port:6443

We will need the IP of the load balancer. Using the ID of the load balancer, run:

doctl compute load-balancer get --format IP <load balancer ID>

Save it, as we will need it in the next step.
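
If you prefer to capture it in a shell variable, something like the following should work, assuming your doctl version supports the --no-header flag:

LB_PUBLIC_IP=$(doctl compute load-balancer get --format IP --no-header <load balancer ID>)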

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-digital-ocean-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode

Create the Droplets

Create the Bootstrap Node

doctl compute droplet create \
    --region $REGION \
    --image <image ID> \
    --size s-2vcpu-4gb \
    --enable-private-networking \
    --tag-names talos-digital-ocean-tutorial-control-plane \
    --user-data-file init.yaml \
    --ssh-keys <ssh key fingerprint> \
    talos-control-plane-1

Note: Although SSH is not used by Talos, DigitalOcean still requires that an SSH key be associated with the droplet. Create a dummy key that can be used to satisfy this requirement.

Create the Remaining Control Plane Nodes

Run the following twice, to give ourselves three total control plane nodes:

doctl compute droplet create \
    --region $REGION \
    --image <image ID> \
    --size s-2vcpu-4gb \
    --enable-private-networking \
    --tag-names talos-digital-ocean-tutorial-control-plane \
    --user-data-file controlplane.yaml \
    --ssh-keys <ssh key fingerprint> \
    talos-control-plane-2
doctl compute droplet create \
    --region $REGION \
    --image <image ID> \
    --size s-2vcpu-4gb \
    --enable-private-networking \
    --tag-names talos-digital-ocean-tutorial-control-plane \
    --user-data-file controlplane.yaml \
    --ssh-keys <ssh key fingerprint> \
    talos-control-plane-3

Create the Worker Nodes

Run the following to create a worker node:

doctl compute droplet create \
    --region $REGION \
    --image <image ID> \
    --size s-2vcpu-4gb \
    --enable-private-networking \
    --user-data-file join.yaml \
    --ssh-keys <ssh key fingerprint> \
    talos-worker-1

Retrieve the kubeconfig

To configure talosctl we will need the first control plane node’s IP:

doctl compute droplet get --format PublicIPv4 <droplet ID>

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .

4.4 - GCP

Creating a cluster via the CLI on Google Cloud Platform.

Creating a Cluster via the CLI

In this guide, we will create an HA Kubernetes cluster in GCP with 1 worker node. We will assume an existing Cloud Storage bucket, and some familiarity with Google Cloud. If you need more information on Google Cloud specifics, please see the official Google documentation.

Environment Setup

We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.

# Storage account to use
export STORAGE_BUCKET="StorageBucketName"
# Region
export REGION="us-central1"

Create the Image

First, download the Google Cloud image from a Talos release. These images are called gcp-$ARCH.tar.gz.
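
For example, to fetch the amd64 image from the latest release (adjust the version and architecture as needed):

curl -LO https://github.com/talos-systems/talos/releases/latest/download/gcp-amd64.tar.gz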

Upload the Image

Once you have downloaded the image, you can upload it to your storage bucket with:

gsutil cp /path/to/gcp-amd64.tar.gz gs://$STORAGE_BUCKET

Register the image

Now that the image is present in our bucket, we’ll register it.

gcloud compute images create talos \
 --source-uri=gs://$STORAGE_BUCKET/gcp-amd64.tar.gz \
 --guest-os-features=VIRTIO_SCSI_MULTIQUEUE

Network Infrastructure

Load Balancers and Firewalls

Once the image is prepared, we’ll want to work through setting up the network. Issue the following to create a firewall, load balancer, and their required components.

# Create Instance Group
gcloud compute instance-groups unmanaged create talos-ig \
  --zone $REGION-b

# Create port for IG
gcloud compute instance-groups set-named-ports talos-ig \
    --named-ports tcp6443:6443 \
    --zone $REGION-b

# Create health check
gcloud compute health-checks create tcp talos-health-check --port 6443

# Create backend
gcloud compute backend-services create talos-be \
    --global \
    --protocol TCP \
    --health-checks talos-health-check \
    --timeout 5m \
    --port-name tcp6443

# Add instance group to backend
gcloud compute backend-services add-backend talos-be \
    --global \
    --instance-group talos-ig \
    --instance-group-zone $REGION-b

# Create tcp proxy
gcloud compute target-tcp-proxies create talos-tcp-proxy \
    --backend-service talos-be \
    --proxy-header NONE

# Create LB IP
gcloud compute addresses create talos-lb-ip --global

# Forward 443 from LB IP to tcp proxy
gcloud compute forwarding-rules create talos-fwd-rule \
    --global \
    --ports 443 \
    --address talos-lb-ip \
    --target-tcp-proxy talos-tcp-proxy

# Create firewall rule for health checks
gcloud compute firewall-rules create talos-controlplane-firewall \
     --source-ranges 130.211.0.0/22,35.191.0.0/16 \
     --target-tags talos-controlplane \
     --allow tcp:6443

# Create firewall rule to allow talosctl access
gcloud compute firewall-rules create talos-controlplane-talosctl \
  --source-ranges 0.0.0.0/0 \
  --target-tags talos-controlplane \
  --allow tcp:50000

Cluster Configuration

With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.

LB_PUBLIC_IP=$(gcloud compute forwarding-rules describe talos-fwd-rule \
               --global \
               --format json \
               | jq -r .IPAddress)

talosctl gen config talos-k8s-gcp-tutorial https://${LB_PUBLIC_IP}:443

Compute Creation

We are now ready to create our GCP nodes.

# Create control plane 0
gcloud compute instances create talos-controlplane-0 \
  --image talos \
  --zone $REGION-b \
  --tags talos-controlplane \
  --boot-disk-size 20GB \
  --metadata-from-file=user-data=./init.yaml

# Create control plane 1/2
for i in $( seq 1 2 ); do
  gcloud compute instances create talos-controlplane-$i \
    --image talos \
    --zone $REGION-b \
    --tags talos-controlplane \
    --boot-disk-size 20GB \
    --metadata-from-file=user-data=./controlplane.yaml
done

# Add control plane nodes to instance group
for i in $( seq 0 1 2 ); do
  gcloud compute instance-groups unmanaged add-instances talos-ig \
      --zone $REGION-b \
      --instances talos-controlplane-$i
done

# Create worker
gcloud compute instances create talos-worker-0 \
  --image talos \
  --zone $REGION-b \
  --boot-disk-size 20GB \
  --metadata-from-file=user-data=./join.yaml

Retrieve the kubeconfig

You should now be able to interact with your cluster with talosctl. First, we need to discover the public IP of our first control plane node.

CONTROL_PLANE_0_IP=$(gcloud compute instances describe talos-controlplane-0 \
                     --zone $REGION-b \
                     --format json \
                     | jq -r '.networkInterfaces[0].accessConfigs[0].natIP')

talosctl --talosconfig ./talosconfig config endpoint $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig config node $CONTROL_PLANE_0_IP
talosctl --talosconfig ./talosconfig kubeconfig .
kubectl --kubeconfig ./kubeconfig get nodes

4.5 - OpenStack

Creating a cluster via the CLI on OpenStack.

Creating a Cluster via the CLI

In this guide, we will create an HA Kubernetes cluster in OpenStack with 1 worker node. We assume some familiarity with OpenStack. If you need more information on OpenStack specifics, please see the official OpenStack documentation.

Environment Setup

You should have an existing openrc file. This file will provide environment variables necessary to talk to your OpenStack cloud. See here for instructions on fetching this file.

Create the Image

First, download the OpenStack image from a Talos release. These images are called openstack-$ARCH.tar.gz. Untar this file with tar -xvf openstack-$ARCH.tar.gz. The resulting file will be called disk.raw.

Upload the Image

Once you have the image, you can upload to OpenStack with:

openstack image create --public --disk-format raw --file disk.raw talos

Network Infrastructure

Load Balancer and Network Ports

Once the image is prepared, you will need to work through setting up the network. Issue the following to create a load balancer, the necessary network ports for each control plane node, and associations between the two.

Creating loadbalancer:

# Create load balancer, updating vip-subnet-id if necessary
openstack loadbalancer create --name talos-control-plane --vip-subnet-id public

# Create listener
openstack loadbalancer listener create --name talos-control-plane-listener --protocol TCP --protocol-port 6443 talos-control-plane

# Pool and health monitoring
openstack loadbalancer pool create --name talos-control-plane-pool --lb-algorithm ROUND_ROBIN --listener talos-control-plane-listener --protocol TCP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP talos-control-plane-pool

Creating ports:

# Create ports for control plane nodes, updating network name if necessary
openstack port create --network shared talos-control-plane-1
openstack port create --network shared talos-control-plane-2
openstack port create --network shared talos-control-plane-3

# Create floating IPs for the ports, so that you will have talosctl connectivity to each control plane
openstack floating ip create --port talos-control-plane-1 public
openstack floating ip create --port talos-control-plane-2 public
openstack floating ip create --port talos-control-plane-3 public

Note: Take note of the private and public IPs associated with each of these ports, as they will be used in the next step. Additionally, take note of the port IDs, as they will be used in server creation.
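
For example, to look these values up again later (the port name is one of the ports created above):

# Show a port's ID and private (fixed) IP; repeat for each control plane port
openstack port show talos-control-plane-1 -c id -c fixed_ips

# List floating IPs and the fixed IPs they are associated with
openstack floating ip list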

Associate the ports’ private IPs with the load balancer:

# Create members for each port IP, updating subnet-id and address as necessary.
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-1 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-2 PORT> --protocol-port 6443 talos-control-plane-pool
openstack loadbalancer member create --subnet-id shared-subnet --address <PRIVATE IP OF talos-control-plane-3 PORT> --protocol-port 6443 talos-control-plane-pool

Security Groups

This example uses the default security group in OpenStack. Ports have been opened to ensure that connectivity from both inside and outside the group is possible. You will want to allow, at a minimum, ports 6443 (Kubernetes API server) and 50000 (Talos API) from external sources. It is also recommended to allow communication over all ports from within the subnet.

Cluster Configuration

With our networking bits setup, we’ll fetch the IP for our load balancer and create our configuration files.

LB_PUBLIC_IP=$(openstack loadbalancer show talos-control-plane -f json | jq -r .vip_address)

talosctl gen config talos-k8s-openstack-tutorial https://${LB_PUBLIC_IP}:6443

Compute Creation

We are now ready to create our OpenStack nodes.

Create control plane:

# Create control plane 1. Substitute the correct path to configuration files and the desired flavor.
openstack server create talos-control-plane-1 --flavor m1.small --nic port-id=talos-control-plane-1 --image talos --user-data /path/to/init.yaml

# Create control planes 2 and 3, substituting the same info.
for i in $( seq 2 3 ); do
  openstack server create talos-control-plane-$i --flavor m1.small --nic port-id=talos-control-plane-$i --image talos --user-data /path/to/controlplane.yaml
done

Create worker:

# Update network name as necessary.
openstack server create talos-worker-1 --flavor m1.small --network shared --image talos --user-data /path/to/join.yaml

Note: This step can be repeated to add more workers.

Retrieve the kubeconfig

You should now be able to interact with your cluster with talosctl. We will use one of the floating IPs we allocated earlier. It does not matter which one.

talosctl --talosconfig ./talosconfig config endpoint <FLOATING_IP>
talosctl --talosconfig ./talosconfig config node <FLOATING_IP>
talosctl --talosconfig ./talosconfig kubeconfig
kubectl --kubeconfig ./kubeconfig get nodes

5 - Local Platforms

5.1 - Docker

Creating Talos Kubernetes cluster using Docker.

In this guide we will create a Kubernetes cluster in Docker, using a containerized version of Talos.

Running Talos in Docker is intended for CI pipelines and for local testing when you need a quick and easy cluster. Furthermore, if you are running Talos in production, it provides an excellent way for developers to develop against the same version of Talos.

Requirements

The following are requirements for running Talos in Docker:

  • Docker 18.03 or greater
  • a recent version of talosctl

Caveats

Because Talos runs in a container, certain APIs are not available when running in Docker. For example, upgrade, reset, and similar APIs don't apply in container mode.

Create the Cluster

Creating a local cluster is as simple as:

talosctl cluster create --wait

Once the above finishes successfully, your talosconfig (~/.talos/config) will be configured to point to the new cluster.

If you are running on macOS, an additional command is required:

talosctl config --endpoints 127.0.0.1

Note: Startup times can take up to a minute before the cluster is available.

Retrieve and Configure the kubeconfig

talosctl kubeconfig .
kubectl --kubeconfig kubeconfig config set-cluster talos-default --server https://127.0.0.1:6443

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace. To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.
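
For example, assuming talosctl is already pointing at the local cluster (the kubelet container name is only an illustrative target):

# List containers in the system and k8s.io namespaces
talosctl containers
talosctl containers -k
# View the logs of a specific container
talosctl logs kubelet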

Cleaning Up

To cleanup, run:

talosctl cluster destroy

5.2 - Firecracker

Creating Talos Kubernetes cluster using Firecracker VMs.

In this guide we will create a Kubernetes cluster using Firecracker.

Note: Talos on QEMU offers an easier way to run Talos in a set of VMs.

Requirements

  • Linux
  • a kernel with
    • KVM enabled (/dev/kvm must exist)
    • CONFIG_NET_SCH_NETEM enabled
    • CONFIG_NET_SCH_INGRESS enabled
  • at least CAP_SYS_ADMIN and CAP_NET_ADMIN capabilities
  • firecracker (v0.21.0 or higher)
  • bridge, static and firewall CNI plugins from the standard CNI plugins, and tc-redirect-tap CNI plugin from the awslabs tc-redirect-tap installed to /opt/cni/bin
  • iptables
  • /etc/cni/conf.d directory should exist
  • /var/run/netns directory should exist

Installation

How to get firecracker (v0.21.0 or higher)

You can download the firecracker binary via github.com/firecracker-microvm/firecracker/releases

curl https://github.com/firecracker-microvm/firecracker/releases/download/<version>/firecracker-<version>-<arch> -L -o firecracker

For example version v0.21.1 for linux platform:

curl https://github.com/firecracker-microvm/firecracker/releases/download/v0.21.1/firecracker-v0.21.1-x86_64 -L -o firecracker
sudo cp firecracker /usr/local/bin
sudo chmod +x /usr/local/bin/firecracker

Install talosctl

You can download talosctl and all required binaries via github.com/talos-systems/talos/releases

curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl

For example version v0.8.0 for linux platform:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl

Install bridge, firewall and static required CNI plugins

You can download the required standard CNI plugins via github.com/containernetworking/plugins/releases

curl https://github.com/containernetworking/plugins/releases/download/<version>/cni-plugins-<platform>-<arch>-<version>.tgz -L -o cni-plugins-<platform>-<arch>-<version>.tgz

For example version v0.8.5 for linux platform:

curl https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz -L -o cni-plugins-linux-amd64-v0.8.5.tgz
mkdir cni-plugins-linux
tar -xf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins-linux
sudo mkdir -p /opt/cni/bin
sudo cp cni-plugins-linux/{bridge,firewall,static} /opt/cni/bin

Install tc-redirect-tap CNI plugin

You should install the CNI plugin from the tc-redirect-tap repository github.com/awslabs/tc-redirect-tap

go get -d github.com/awslabs/tc-redirect-tap/cmd/tc-redirect-tap
cd $GOPATH/src/github.com/awslabs/tc-redirect-tap
make all
sudo cp tc-redirect-tap /opt/cni/bin

Note: if $GOPATH is not set, it defaults to ~/go.

Install Talos kernel and initramfs

The Firecracker provisioner depends on the uncompressed Talos kernel (vmlinuz) and initramfs (initramfs.xz). These files can be downloaded from the Talos release:

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/vmlinuz -L -o _out/vmlinuz
curl https://github.com/siderolabs/talos/releases/download/<version>/initramfs.xz -L -o _out/initramfs.xz

For example version v0.8.0:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/vmlinuz -L -o _out/vmlinuz
curl https://github.com/siderolabs/talos/releases/download/v0.8.0/initramfs.xz -L -o _out/initramfs.xz

Create the Cluster

sudo talosctl cluster create --provisioner firecracker

Once the above finishes successfully, your talosconfig (~/.talos/config) will be configured to point to the new cluster.

Retrieve and Configure the kubeconfig

talosctl kubeconfig .

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace. To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.

A bridge interface will be created and assigned the default IP 10.5.0.1. Each node will be directly accessible on the subnet specified at cluster creation time. A load balancer runs on 10.5.0.1 by default, which handles load balancing for the Talos and Kubernetes APIs.

You can see a summary of the cluster state by running:

$ talosctl cluster show --provisioner firecracker
PROVISIONER       firecracker
NAME              talos-default
NETWORK NAME      talos-default
NETWORK CIDR      10.5.0.0/24
NETWORK GATEWAY   10.5.0.1
NETWORK MTU       1500

NODES:

NAME                     TYPE           IP         CPU    RAM      DISK
talos-default-master-1   Init           10.5.0.2   1.00   1.6 GB   4.3 GB
talos-default-master-2   ControlPlane   10.5.0.3   1.00   1.6 GB   4.3 GB
talos-default-master-3   ControlPlane   10.5.0.4   1.00   1.6 GB   4.3 GB
talos-default-worker-1   Join           10.5.0.5   1.00   1.6 GB   4.3 GB

Cleaning Up

To cleanup, run:

sudo talosctl cluster destroy --provisioner firecracker

Note: In the case that the host machine is rebooted before destroying the cluster, you may need to manually remove ~/.talos/clusters/talos-default.

Manual Clean Up

The talosctl cluster destroy command depends heavily on the cluster's state directory, which contains all information related to the cluster: the PIDs and the network associated with the cluster nodes.

If you have deleted the state folder by mistake, or if you would like to clean up the environment manually, here are the steps:

Stopping VMs

Find the firecracker --api-sock processes by executing:

ps -elf | grep '[f]irecracker --api-sock'

To stop the VMs manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where VMs are running with PIDs 158065 and 158216

ps -elf | grep '[f]irecracker --api-sock'
4 S root      158065  157615 44  80   0 - 264152 -     07:54 ?        00:34:25 firecracker --api-sock /root/.talos/clusters/k8s/k8s-master-1.sock
4 S root      158216  157617 18  80   0 - 264152 -     07:55 ?        00:14:47 firecracker --api-sock /root/.talos/clusters/k8s/k8s-worker-1.sock
sudo kill -s SIGTERM 158065
sudo kill -s SIGTERM 158216

Remove VMs

Find the talosctl firecracker-launch processes by executing:

ps -elf | grep 'talosctl firecracker-launch'

To remove the VMs manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where VMs are running with PIDs 157615 and 157617

ps -elf | grep '[t]alosctl firecracker-launch'
0 S root      157615    2835  0  80   0 - 184934 -     07:53 ?        00:00:00 talosctl firecracker-launch
0 S root      157617    2835  0  80   0 - 185062 -     07:53 ?        00:00:00 talosctl firecracker-launch
sudo kill -s SIGTERM 157615
sudo kill -s SIGTERM 157617

Remove load balancer

Find the talosctl loadbalancer-launch process by executing:

ps -elf | grep 'talosctl loadbalancer-launch'

To remove the LB manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where loadbalancer is running with PID 157609

ps -elf | grep '[t]alosctl loadbalancer-launch'
4 S root      157609    2835  0  80   0 - 184998 -     07:53 ?        00:00:07 talosctl loadbalancer-launch --loadbalancer-addr 10.5.0.1 --loadbalancer-upstreams 10.5.0.2
sudo kill -s SIGTERM 157609

Remove network

This is the trickier part if you have already deleted the state folder. If you haven't, the bridge name is recorded in state.yaml in the /root/.talos/clusters/<cluster-name> directory.

sudo cat /root/.talos/clusters/<cluster-name>/state.yaml | grep bridgename
bridgename: talos<uuid>

If you only had one cluster, it will be the interface named talos<uuid>:

46: talos<uuid>: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether a6:72:f4:0a:d3:9c brd ff:ff:ff:ff:ff:ff
    inet 10.5.0.1/24 brd 10.5.0.255 scope global talos17c13299
       valid_lft forever preferred_lft forever
    inet6 fe80::a472:f4ff:fe0a:d39c/64 scope link
       valid_lft forever preferred_lft forever

To remove this interface:

sudo ip link del talos<uuid>

Remove state directory

To remove the state directory execute:

sudo rm -Rf /root/.talos/clusters/<cluster-name>

Troubleshooting

Logs

Inspect logs directory

sudo cat /root/.talos/clusters/<cluster-name>/*.log

Logs are saved under <cluster-name>-<role>-<node-id>.log

For example in case of k8s cluster name:

sudo ls -la /root/.talos/clusters/k8s | grep log
-rw-r--r--. 1 root root      69415 Apr 26 20:58 k8s-master-1.log
-rw-r--r--. 1 root root      68345 Apr 26 20:58 k8s-worker-1.log
-rw-r--r--. 1 root root      24621 Apr 26 20:59 lb.log

Inspect logs during the installation

sudo su -
tail -f /root/.talos/clusters/<cluster-name>/*.log

Post-installation

After executing these steps, you should be able to use kubectl:

sudo talosctl kubeconfig .
mv kubeconfig $HOME/.kube/config
sudo chown $USER:$USER $HOME/.kube/config

5.3 - QEMU

Creating Talos Kubernetes cluster using QEMU VMs.

In this guide we will create a Kubernetes cluster using QEMU.

Video Walkthrough

To see a live demo of this writeup, see the video below:

Requirements

  • Linux
  • a kernel with
    • KVM enabled (/dev/kvm must exist)
    • CONFIG_NET_SCH_NETEM enabled
    • CONFIG_NET_SCH_INGRESS enabled
  • at least CAP_SYS_ADMIN and CAP_NET_ADMIN capabilities
  • QEMU
  • bridge, static and firewall CNI plugins from the standard CNI plugins, and tc-redirect-tap CNI plugin from the awslabs tc-redirect-tap installed to /opt/cni/bin (installed automatically by talosctl)
  • iptables
  • /var/run/netns directory should exist

Installation

How to get QEMU

Install QEMU with your operating system package manager. For example, on Ubuntu for x86:

apt install qemu-system-x86 qemu-kvm

Install talosctl

You can download talosctl and all required binaries via github.com/talos-systems/talos/releases

curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl

For example version v0.8.0 for linux platform:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl

Install Talos kernel and initramfs

The QEMU provisioner depends on the Talos kernel (vmlinuz) and initramfs (initramfs.xz). These files can be downloaded from the Talos release:

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/vmlinuz-<arch> -L -o _out/vmlinuz-<arch>
curl https://github.com/siderolabs/talos/releases/download/<version>/initramfs-<arch>.xz -L -o _out/initramfs-<arch>.xz

For example version v0.8.0:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/vmlinuz-amd64 -L -o _out/vmlinuz-amd64
curl https://github.com/siderolabs/talos/releases/download/v0.8.0/initramfs-amd64.xz -L -o _out/initramfs-amd64.xz

Create the Cluster

For the first time, create root state directory as your user so that you can inspect the logs as non-root user:

mkdir -p ~/.talos/clusters

Create the cluster:

sudo -E talosctl cluster create --provisioner qemu

Before the first cluster is created, talosctl will download the CNI bundle for the VM provisioning and install it to ~/.talos/cni directory.

Once the above finishes successfully, your talosconfig (~/.talos/config) will be configured to point to the new cluster, and kubeconfig will be downloaded and merged into default kubectl config location (~/.kube/config).

The cluster provisioning process can be optimized with registry pull-through caches.

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. For example, to view current running containers, run talosctl -n 10.5.0.2 containers for a list of containers in the system namespace, or talosctl -n 10.5.0.2 containers -k for the k8s.io namespace. To view the logs of a container, use talosctl -n 10.5.0.2 logs <container> or talosctl -n 10.5.0.2 logs -k <container>.

A bridge interface will be created and assigned the default IP 10.5.0.1. Each node will be directly accessible on the subnet specified at cluster creation time. A load balancer runs on 10.5.0.1 by default, which handles load balancing for the Kubernetes API.

You can see a summary of the cluster state by running:

$ talosctl cluster show --provisioner qemu
PROVISIONER       qemu
NAME              talos-default
NETWORK NAME      talos-default
NETWORK CIDR      10.5.0.0/24
NETWORK GATEWAY   10.5.0.1
NETWORK MTU       1500

NODES:

NAME                     TYPE           IP         CPU    RAM      DISK
talos-default-master-1   Init           10.5.0.2   1.00   1.6 GB   4.3 GB
talos-default-master-2   ControlPlane   10.5.0.3   1.00   1.6 GB   4.3 GB
talos-default-master-3   ControlPlane   10.5.0.4   1.00   1.6 GB   4.3 GB
talos-default-worker-1   Join           10.5.0.5   1.00   1.6 GB   4.3 GB

Cleaning Up

To cleanup, run:

sudo -E talosctl cluster destroy --provisioner qemu

Note: In the case that the host machine is rebooted before destroying the cluster, you may need to manually remove ~/.talos/clusters/talos-default.

Manual Clean Up

The talosctl cluster destroy command depends heavily on the cluster's state directory, which contains all information related to the cluster: the PIDs and the network associated with the cluster nodes.

If you have deleted the state folder by mistake, or if you would like to clean up the environment manually, here are the steps:

Remove VM Launchers

Find the process of talosctl qemu-launch:

ps -elf | grep 'talosctl qemu-launch'

To remove the VMs manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where VMs are running with PIDs 157615 and 157617

ps -elf | grep '[t]alosctl qemu-launch'
0 S root      157615    2835  0  80   0 - 184934 -     07:53 ?        00:00:00 talosctl qemu-launch
0 S root      157617    2835  0  80   0 - 185062 -     07:53 ?        00:00:00 talosctl qemu-launch
sudo kill -s SIGTERM 157615
sudo kill -s SIGTERM 157617

Stopping VMs

Find the process of qemu-system:

ps -elf | grep 'qemu-system'

To stop the VMs manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where VMs are running with PIDs 158065 and 158216

ps -elf | grep qemu-system
2 S root     1061663 1061168 26  80   0 - 1786238 -    14:05 ?        01:53:56 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/home/username/.talos/clusters/talos-default/bootstrap-master.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=1e:86:c6:b4:7c:c4 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=7ec0a73c-826e-4eeb-afd1-39ff9f9160ca -machine q35,accel=kvm
2 S root     1061663 1061170 67  80   0 - 621014 -     21:23 ?        00:00:07 qemu-system-x86_64 -m 2048 -drive format=raw,if=virtio,file=/home/username/.talos/clusters/talos-default/pxe-1.disk -smp cpus=2 -cpu max -nographic -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=36:f3:2f:c3:9f:06 -device virtio-rng-pci -no-reboot -boot order=cn,reboot-timeout=5000 -smbios type=1,uuid=ce12a0d0-29c8-490f-b935-f6073ab916a6 -machine q35,accel=kvm
sudo kill -s SIGTERM 1061663
sudo kill -s SIGTERM 1061663

Remove load balancer

Find the process of talosctl loadbalancer-launch:

ps -elf | grep 'talosctl loadbalancer-launch'

To remove the LB manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where loadbalancer is running with PID 157609

ps -elf | grep '[t]alosctl loadbalancer-launch'
4 S root      157609    2835  0  80   0 - 184998 -     07:53 ?        00:00:07 talosctl loadbalancer-launch --loadbalancer-addr 10.5.0.1 --loadbalancer-upstreams 10.5.0.2
sudo kill -s SIGTERM 157609

Remove DHCP server

Find the process of talosctl dhcpd-launch:

ps -elf | grep 'talosctl dhcpd-launch'

To stop the DHCP server manually, execute:

sudo kill -s SIGTERM <PID>

Example output, where the DHCP server is running with PID 157609

ps -elf | grep '[t]alosctl dhcpd-launch'
4 S root      157609    2835  0  80   0 - 184998 -     07:53 ?        00:00:07 talosctl dhcpd-launch --state-path /home/username/.talos/clusters/talos-default --addr 10.5.0.1 --interface talosbd9c32bc
sudo kill -s SIGTERM 157609

Remove network

This is the trickier part if you have already deleted the state folder. If you haven't, the bridge name is recorded in state.yaml in the ~/.talos/clusters/<cluster-name> directory.

sudo cat ~/.talos/clusters/<cluster-name>/state.yaml | grep bridgename
bridgename: talos<uuid>

If you only had one cluster, it will be the interface named talos<uuid>:

46: talos<uuid>: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether a6:72:f4:0a:d3:9c brd ff:ff:ff:ff:ff:ff
    inet 10.5.0.1/24 brd 10.5.0.255 scope global talos17c13299
       valid_lft forever preferred_lft forever
    inet6 fe80::a472:f4ff:fe0a:d39c/64 scope link
       valid_lft forever preferred_lft forever

To remove this interface:

sudo ip link del talos<uuid>

Remove state directory

To remove the state directory execute:

sudo rm -Rf /home/$USER/.talos/clusters/<cluster-name>

Troubleshooting

Logs

Inspect logs directory

sudo cat ~/.talos/clusters/<cluster-name>/*.log

Logs are saved under <cluster-name>-<role>-<node-id>.log

For example in case of k8s cluster name:

ls -la ~/.talos/clusters/k8s | grep log
-rw-r--r--. 1 root root      69415 Apr 26 20:58 k8s-master-1.log
-rw-r--r--. 1 root root      68345 Apr 26 20:58 k8s-worker-1.log
-rw-r--r--. 1 root root      24621 Apr 26 20:59 lb.log

Inspect logs during the installation

tail -f ~/.talos/clusters/<cluster-name>/*.log

5.4 - VirtualBox

Creating Talos Kubernetes cluster using VirtualBox VMs.

In this guide we will create a Kubernetes cluster using VirtualBox.

Video Walkthrough

To see a live demo of this writeup, visit YouTube here:

Installation

How to Get VirtualBox

Install VirtualBox with your operating system package manager or from the website. For example, on Ubuntu for x86:

apt install virtualbox

Install talosctl

You can download talosctl via github.com/talos-systems/talos/releases

curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl

For example version v0.8.0 for linux platform:

curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl

Download ISO Image

In order to install Talos in VirtualBox, you will need the ISO image from the Talos release page. You can download talos-amd64.iso via github.com/talos-systems/talos/releases

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso

For example version v0.8.0 for linux platform:

mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/v0.8.0/talos-amd64.iso -L -o _out/talos-amd64.iso

Create VMs

Start by creating a new VM by clicking the “New” button in the VirtualBox UI:

Supply a name for this VM, and specify the Type and Version:

Edit the memory to supply at least 2GB of RAM for the VM:

Proceed through the disk settings, keeping the defaults. You can increase the disk space if desired.

Once created, select the VM and hit “Settings”:

In the “System” section, supply at least 2 CPUs:

In the “Network” section, switch the network “Attached To” section to “Bridged Adapter”:

Finally, in the “Storage” section, select the optical drive and, on the right, select the ISO by browsing your filesystem:

Repeat this process for a second VM to use as a worker node. You can also repeat this for additional nodes desired.

Start Control Plane Node

Once the VMs have been created and updated, start the VM that will be the first control plane node. This VM will boot the ISO image specified earlier and enter “maintenance mode”. Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received. Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP for the rest of this guide. If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4.

Generate Machine Configurations

With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:

talosctl gen config talos-vbox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out

This will create several files in the _out directory: init.yaml, controlplane.yaml, join.yaml, and talosconfig.

Create Control Plane Node

Using the init.yaml generated above, you can now apply this config using talosctl. Issue:

talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/init.yaml

You should now see some action in the VirtualBox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.

Note: This process can be repeated multiple times to create an HA control plane. Simply apply controlplane.yaml instead of init.yaml for subsequent nodes.

Create Worker Node

Create at least a single worker node using a process similar to the control plane creation above. Start the worker node VM and wait for it to enter “maintenance mode”. Take note of the worker node’s IP address, which will be referred to as $WORKER_IP.

Issue:

talosctl apply-config --insecure --nodes $WORKER_IP --file _out/join.yaml

Note: This process can be repeated multiple times to add additional workers.

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace. To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.

First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:

export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP

Retrieve and Configure the kubeconfig

Fetch the kubeconfig file from the control plane node by issuing:

talosctl kubeconfig

You can then use kubectl in this fashion:

kubectl get nodes

Cleaning Up

To cleanup, simply stop and delete the virtual machines from the VirtualBox UI.

6 - Single Board Computers

6.1 - Banana Pi M64

Installing Talos on Banana Pi M64 SBC using raw disk image.

Prerequisites

You will need

  • talosctl
  • an SD card

Download the latest alpha talosctl.

curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v0.8.4/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

Download the Image

Download the image and decompress it:

curl -LO https://github.com/siderolabs/talos/releases/download/v0.8.4/metal-bananapi_m64-arm64.img.xz
xz -d metal-bananapi_m64-arm64.img.xz

Writing the Image

The path to your SD card can be found using fdisk on Linux or diskutil on Mac OS. In this example we will assume /dev/mmcblk0.

Now dd the image to your SD card:

sudo dd if=metal-bananapi_m64-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M

Bootstrapping the Node

Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:

talosctl apply-config --insecure --interactive --nodes <node IP or DNS name>

Once the interactive installation is applied, the cluster will form and you can then use kubectl.

Retrieve the kubeconfig

Retrieve the admin kubeconfig by running:

talosctl kubeconfig

6.2 - Libre Computer Board ALL-H3-CC

Installing Talos on Libre Computer Board ALL-H3-CC SBC using raw disk image.

Prerequisites

You will need

  • talosctl
  • an SD card

Download the latest alpha talosctl.

curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v0.8.4/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

Download the Image

Download the image and decompress it:

curl -LO https://github.com/siderolabs/talos/releases/download/v0.8.4/metal-libretech_all_h3_cc_h5-arm64.img.xz
xz -d metal-libretech_all_h3_cc_h5-arm64.img.xz

Writing the Image

The path to your SD card can be found using fdisk on Linux or diskutil on Mac OS. In this example we will assume /dev/mmcblk0.

Now dd the image to your SD card:

sudo dd if=metal-libretech_all_h3_cc_h5-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M

Bootstrapping the Node

Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:

talosctl apply-config --insecure --interactive --nodes <node IP or DNS name>

Once the interactive installation is applied, the cluster will form and you can then use kubectl.

Retrieve the kubeconfig

Retrieve the admin kubeconfig by running:

talosctl kubeconfig

6.3 - Pine64 Rock64

Installing Talos on Pine64 Rock64 SBC using raw disk image.

Prerequisites

You will need

  • talosctl
  • an SD card

Download the latest alpha talosctl.

curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v0.8.4/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

Download the Image

Download the image and decompress it:

curl -LO https://github.com/siderolabs/talos/releases/download/v0.8.4/metal-rock64-arm64.img.xz
xz -d metal-rock64-arm64.img.xz

Writing the Image

The path to your SD card can be found using fdisk on Linux or diskutil on Mac OS. In this example we will assume /dev/mmcblk0.

Now dd the image to your SD card:

sudo dd if=metal-rock64-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M

Bootstrapping the Node

Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:

talosctl apply-config --insecure --interactive --nodes <node IP or DNS name>

Once the interactive installation is applied, the cluster will form and you can then use kubectl.

Retrieve the kubeconfig

Retrieve the admin kubeconfig by running:

talosctl kubeconfig

6.4 - Raspberry Pi 4 Model B

Installing Talos on Rpi4 SBC using raw disk image.

Prerequisites

You will need

  • talosctl
  • an SD card

Download the latest alpha talosctl.

curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/download/v0.8.4/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl

Updating the EEPROM

At least version v2020.09.03-138a1 of the bootloader (rpi-eeprom) is required. To update the bootloader we will need an SD card. The path to your SD card can be found using fdisk on Linux or diskutil on Mac OS; in this example we will assume /dev/mmcblk0. Insert the SD card into your computer and run the following:

curl -LO https://github.com/raspberrypi/rpi-eeprom/releases/download/v2020.09.03-138a1/rpi-boot-eeprom-recovery-2020-09-03-vl805-000138a1.zip
sudo mkfs.fat -I /dev/mmcblk0
sudo mount /dev/mmcblk0 /mnt
sudo bsdtar -xf rpi-boot-eeprom-recovery-2020-09-03-vl805-000138a1.zip -C /mnt

Remove the SD card from your local machine and insert it into the Raspberry Pi. Power the Raspberry Pi on, and wait at least 10 seconds. If successful, the green LED light will blink rapidly (forever), otherwise an error pattern will be displayed. If an HDMI display is attached then the screen will display green for success or red if a failure occurs. Power off the Raspberry Pi and remove the SD card from it.

Note: Updating the bootloader only needs to be done once.

Download the Image

Download the image and decompress it:

curl -LO https://github.com/siderolabs/talos/releases/download/v0.8.4/metal-rpi_4-arm64.img.xz
xz -d metal-rpi_4-arm64.img.xz

Writing the Image

Now dd the image to your SD card:

sudo dd if=metal-rpi_4-arm64.img of=/dev/mmcblk0 conv=fsync bs=4M

Bootstrapping the Node

Insert the SD card into your board, turn it on, and wait for the console to show you the instructions for bootstrapping the node. Follow the instructions in the console output to connect to the interactive installer:

talosctl apply-config --insecure --interactive --nodes <node IP or DNS name>

Once the interactive installation is applied, the cluster will form and you can then use kubectl.

Retrieve the kubeconfig

Retrieve the admin kubeconfig by running:

talosctl kubeconfig

Troubleshooting

The following table can be used to troubleshoot booting issues:

Long Flashes   Short Flashes   Status
0              3               Generic failure to boot
0              4               start*.elf not found
0              7               Kernel image not found
0              8               SDRAM failure
0              9               Insufficient SDRAM
0              10              In HALT state
2              1               Partition not FAT
2              2               Failed to read from partition
2              3               Extended partition not FAT
2              4               File signature/hash mismatch - Pi 4
4              4               Unsupported board type
4              5               Fatal firmware error
4              6               Power failure type A
4              7               Power failure type B

7 - Guides

7.1 - Advanced Networking

Static Addressing

Static addressing consists of specifying cidr, routes (remember to add your default gateway), and interface. Most likely you’ll also want to define the nameservers so you have properly functioning DNS.

machine:
  network:
    hostname: talos
    nameservers:
      - 10.0.0.1
    interfaces:
      - interface: eth0
        cidr: 10.0.0.201/8
        mtu: 8765
        routes:
          - network: 0.0.0.0/0
            gateway: 10.0.0.1
      - interface: eth1
        ignore: true
  time:
    servers:
      - time.cloudflare.com

Additional Addresses for an Interface

In some environments you may need to set additional addresses on an interface. In the following example, we set two additional addresses on the loopback interface.

machine:
  network:
    interfaces:
      - interface: lo0
        cidr: 192.168.0.21/24
      - interface: lo0
        cidr: 10.2.2.2/24

Bonding

The following example shows how to create a bonded interface.

machine:
  network:
    interfaces:
      - interface: bond0
        dhcp: true
        bond:
          mode: 802.3ad
          lacpRate: fast
          xmitHashPolicy: layer3+4
          miimon: 100
          updelay: 200
          downdelay: 200
          interfaces:
            - eth0
            - eth1

VLANs

To set up VLANs on a specific device, use an array of VLANs to add. The master device may be configured without addressing by setting dhcp to false.

machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: false
        vlans:
          - vlanId: 100
            cidr: "192.168.2.10/28"
            routes:
              - network: 0.0.0.0/0
                gateway: 192.168.2.1

7.2 - Air-gapped Environments

In this guide we will create a Talos cluster running in an air-gapped environment with all the required images being pulled from an internal registry. We will use the QEMU provisioner available in talosctl to create a local cluster, but the same approach could be used to deploy Talos in bigger air-gapped networks.

Requirements

The following are requirements for this guide:

  • Docker 18.03 or greater
  • Requirements for the Talos QEMU cluster

Identifying Images

In air-gapped environments, access to the public Internet is restricted, so Talos can’t pull images from public Docker registries (docker.io, ghcr.io, etc.). We need to identify the images required to install and run Talos. The same strategy can be used for images required by custom workloads running on the cluster.

The talosctl images command provides a list of default images used by the Talos cluster (with default configuration settings). To print the list of images, run:

talosctl images

This list contains images required by a default deployment of Talos. There might be additional images required for the workloads running on this cluster, and those should be added to this list.
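
As a sketch, you could capture the default list in a file and append your workload images to it; the nginx reference below is only a hypothetical placeholder. The mirroring loops further down could then iterate over such a file instead of calling talosctl images directly.

talosctl images > images.txt
# Append any extra images required by your workloads (placeholder example)
echo 'docker.io/library/nginx:1.19' >> images.txt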

Preparing the Internal Registry

As access to the public registries is restricted, we have to run an internal Docker registry. In this guide, we will launch the registry on the same machine using Docker:

$ docker run -d -p 6000:5000 --restart always --name registry-airgapped registry:2
1bf09802bee1476bc463d972c686f90a64640d87dacce1ac8485585de69c91a5

This registry will accept connections on port 6000 on the host IPs. The registry is empty by default, so we have to fill it with the images required by Talos.

First, we pull all the images to our local Docker daemon:

$ for image in `talosctl images`; do docker pull $image; done
v0.12.0-amd64: Pulling from coreos/flannel
Digest: sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
...

All images are now stored in the Docker daemon store:

$ docker images
ghcr.io/talos-systems/install-cni    v0.3.0-12-g90722c3      980d36ee2ee1        5 days ago          79.7MB
k8s.gcr.io/kube-proxy-amd64          v1.20.0                 33c60812eab8        2 weeks ago         118MB
...

Now we need to re-tag them so that we can push them to our local registry. We are going to replace the first component of the image name (before the first slash) with our registry endpoint 127.0.0.1:6000:

$ for image in `talosctl images`; do \
    docker tag $image `echo $image | sed -E 's#^[^/]+/#127.0.0.1:6000/#'`; \
  done

As the next step, we push images to the internal registry:

$ for image in `talosctl images`; do \
    docker push `echo $image | sed -E 's#^[^/]+/#127.0.0.1:6000/#'`; \
  done

We can now verify that the images are pushed to the registry:

$ curl  http://127.0.0.1:6000/v2/_catalog
{"repositories":["autonomy/kubelet","coredns","coreos/flannel","etcd-development/etcd","kube-apiserver-amd64","kube-controller-manager-amd64","kube-proxy-amd64","kube-scheduler-amd64","talos-systems/install-cni","talos-systems/installer"]}

Note: images in the registry don’t have the registry endpoint prefix anymore.

Launching Talos in an Air-gapped Environment

For Talos to use the internal registry, we use the registry mirror feature to redirect all the image pull requests to the internal registry. This means that the registry endpoint (as the first component of the image reference) gets ignored, and all pull requests are sent directly to the specified endpoint.

We are going to use a QEMU-based Talos cluster for this guide, but the same approach works with Docker-based clusters as well. As QEMU-based clusters go through the full Talos install process, they better model a real air-gapped environment.

The talosctl cluster create command provides conveniences for common configuration options. The only required flag for this guide is --registry-mirror '*'=http://10.5.0.1:6000 which redirects every pull request to the internal registry. The endpoint being used is 10.5.0.1, as this is the default bridge interface address which will be routable from the QEMU VMs (127.0.0.1 IP will be pointing to the VM itself).

$ sudo -E talosctl cluster create --provisioner=qemu --registry-mirror '*'=http://10.5.0.1:6000 --install-image=ghcr.io/talos-systems/installer:v0.8.0
validating CIDR and reserving IPs
generating PKI and tokens
creating state directory in "/home/smira/.talos/clusters/talos-default"
creating network talos-default
creating load balancer
creating dhcpd
creating master nodes
creating worker nodes
waiting for API
...

Note: --install-image should match the image which was copied into the internal registry in the previous step.

You can verify that the cluster is air-gapped by inspecting the registry logs: docker logs -f registry-airgapped.

Closing Notes

Running in an air-gapped environment might require additional configuration changes, for example using custom settings for DNS and NTP servers.

When scaling this guide to a bare-metal environment, the following Talos config snippet could be used as an equivalent of the --registry-mirror flag above:

machine:
  ...
  registries:
    mirrors:
      '*':
        endpoints:
          - http://10.5.0.1:6000/
...

Other Docker registry implementations can be used in place of the registry image used above. If required, auth can be configured for the internal registry (as can custom TLS certificates, if needed).
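
As a rough sketch of such a setup, per-registry auth and TLS options live next to the mirrors in the machine config; the registry address and credentials below are placeholders, so consult the machine configuration reference for the exact fields in your Talos version.

machine:
  registries:
    config:
      "10.5.0.1:6000":
        auth:
          username: <username>
          password: <password>
        tls:
          insecureSkipVerify: true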

7.3 - Configuring Certificate Authorities

Appending the Certificate Authority

Put into each machine the PEM encoded certificate:

machine:
  ...
  files:
    - content: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----        
      permissions: 0644
      path: /etc/ssl/certs/ca-certificates
      op: append

7.4 - Configuring Containerd

The base containerd configuration expects to merge in any additional configs present in /var/cri/conf.d/*.toml.

An example of exposing metrics

Into each machine config, add the following:

machine:
  ...
  files:
    - content: |
        [metrics]
          address = "0.0.0.0:11234"        
      path: /var/cri/conf.d/metrics.toml
      op: create

Create cluster like normal and see that metrics are now present on this port:

$ curl 127.0.0.1:11234/v1/metrics
# HELP container_blkio_io_service_bytes_recursive_bytes The blkio io service bytes recursive
# TYPE container_blkio_io_service_bytes_recursive_bytes gauge
container_blkio_io_service_bytes_recursive_bytes{container_id="0677d73196f5f4be1d408aab1c4125cf9e6c458a4bea39e590ac779709ffbe14",device="/dev/dm-0",major="253",minor="0",namespace="k8s.io",op="Async"} 0
container_blkio_io_service_bytes_recursive_bytes{container_id="0677d73196f5f4be1d408aab1c4125cf9e6c458a4bea39e590ac779709ffbe14",device="/dev/dm-0",major="253",minor="0",namespace="k8s.io",op="Discard"} 0
...
...

7.5 - Configuring Corporate Proxies

Appending the Certificate Authority of MITM Proxies

Put into each machine the PEM encoded certificate:

machine:
  ...
  files:
    - content: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----        
      permissions: 0644
      path: /etc/ssl/certs/ca-certificates
      op: append

Configuring a Machine to Use the Proxy

To make use of a proxy:

machine:
  env:
    http_proxy: <http proxy>
    https_proxy: <https proxy>
    no_proxy: <no proxy>

Additionally, configure the DNS nameservers, and NTP servers:

machine:
  env:
  ...
  time:
    servers:
      - <server 1>
      - <server ...>
      - <server n>
  ...
  network:
    nameservers:
      - <ip 1>
      - <ip ...>
      - <ip n>

7.6 - Configuring Network Connectivity

Configuring Network Connectivity

The simplest way to deploy Talos is by ensuring that all the remote components of the system (talosctl, the control plane nodes, and worker nodes) have layer 2 connectivity. This is not always possible, however, so this page lays out the minimal network access that is required to configure and operate a Talos cluster.

Note: These are the ports required for Talos specifically, and should be configured in addition to the ports required by Kubernetes. See the Kubernetes docs for information on the ports used by Kubernetes itself.

Control plane node(s)

Protocol   Direction   Port Range   Purpose   Used By
TCP        Inbound     50000*       apid      talosctl
TCP        Inbound     50001*       trustd    Control plane nodes, worker nodes

Ports marked with a * are not currently configurable, but that may change in the future. Follow along here.

Worker node(s)

Protocol   Direction   Port Range   Purpose   Used By
TCP        Inbound     50001*       trustd    Control plane nodes

Ports marked with a * are not currently configurable, but that may change in the future. Follow along here.
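
As a quick sanity check, you can probe these ports from the machine that runs talosctl, assuming nc is available; the node address below is illustrative.

# apid, used by talosctl
nc -zv 10.20.30.40 50000
# trustd, used by other nodes
nc -zv 10.20.30.40 50001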

7.7 - Configuring Pull Through Cache

In this guide we will create a set of local caching Docker registry proxies to minimize local cluster startup time.

When running Talos locally, pulling images from Docker registries might take a significant amount of time. We spin up local caching pass-through registries to cache images and configure a local Talos cluster to use those proxies. A similar approach might be used to run Talos in production in air-gapped environments. It can be also used to verify that all the images are available in local registries.

Video Walkthrough

To see a live demo of this writeup, see the video below:

Requirements

The follow are requirements for creating the set of caching proxies:

  • Docker 18.03 or greater
  • Local cluster requirements for either docker or QEMU.

Launch the Caching Docker Registry Proxies

Talos pulls from docker.io, k8s.gcr.io, gcr.io, ghcr.io and quay.io by default. If your configuration is different, you might need to modify the commands below:

docker run -d -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    --restart always \
    --name registry-docker.io registry:2

docker run -d -p 5001:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://k8s.gcr.io \
    --restart always \
    --name registry-k8s.gcr.io registry:2

docker run -d -p 5002:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://quay.io \
    --restart always \
    --name registry-quay.io registry:2.5

docker run -d -p 5003:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://gcr.io \
    --restart always \
    --name registry-gcr.io registry:2

docker run -d -p 5004:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://ghcr.io \
    --restart always \
    --name registry-ghcr.io registry:2

Note: Proxies are started as Docker containers, and they’re automatically configured to start with the Docker daemon. Please note that the quay.io proxy doesn’t support the recent Docker image schema, so we run an older registry image version (2.5).

As a registry container can only handle a single upstream Docker registry, we launch a container per upstream, each on its own host port (5000, 5001, 5002, 5003 and 5004).

Using Caching Registries with QEMU Local Cluster

With a QEMU local cluster, a bridge interface is created on the host. As registry containers expose their ports on the host, we can use bridge IP to direct proxy requests.

sudo talosctl cluster create --provisioner qemu \
    --registry-mirror docker.io=http://10.5.0.1:5000 \
    --registry-mirror k8s.gcr.io=http://10.5.0.1:5001 \
    --registry-mirror quay.io=http://10.5.0.1:5002 \
    --registry-mirror gcr.io=http://10.5.0.1:5003 \
    --registry-mirror ghcr.io=http://10.5.0.1:5004

The Talos local cluster should now start pulling via the caching registries. This can be verified via the registry logs, e.g. docker logs -f registry-docker.io. The first time the cluster boots, images are pulled and cached, so the next cluster boot should be much faster.

Note: 10.5.0.1 is the bridge IP with the default network (10.5.0.0/24); if using a custom --cidr, the value should be adjusted accordingly.
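
For example, with a hypothetical --cidr 10.6.0.0/24 the bridge would sit at 10.6.0.1, and the mirror endpoints would point there instead:

sudo talosctl cluster create --provisioner qemu --cidr 10.6.0.0/24 \
    --registry-mirror docker.io=http://10.6.0.1:5000 \
    --registry-mirror k8s.gcr.io=http://10.6.0.1:5001 \
    --registry-mirror quay.io=http://10.6.0.1:5002 \
    --registry-mirror gcr.io=http://10.6.0.1:5003 \
    --registry-mirror ghcr.io=http://10.6.0.1:5004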

Using Caching Registries with docker Local Cluster

With a Docker local cluster we can use the Docker bridge IP; the default value for that IP is 172.17.0.1. On Linux, the Docker bridge address can be inspected with ip addr show docker0.

talosctl cluster create --provisioner docker \
    --registry-mirror docker.io=http://172.17.0.1:5000 \
    --registry-mirror k8s.gcr.io=http://172.17.0.1:5001 \
    --registry-mirror quay.io=http://172.17.0.1:5002 \
    --registry-mirror gcr.io=http://172.17.0.1:5003 \
    --registry-mirror ghcr.io=http://172.17.0.1:5004

Cleaning Up

To cleanup, run:

docker rm -f registry-docker.io
docker rm -f registry-k8s.gcr.io
docker rm -f registry-quay.io
docker rm -f registry-gcr.io
docker rm -f registry-ghcr.io

Note: Removing docker registry containers also removes the image cache. So if you plan to use caching registries, keep the containers running.

7.8 - Configuring the Cluster Endpoint

In this section, we will step through the configuration of a Talos based Kubernetes cluster. There are three major components we will configure:

  • apid and talosctl
  • the master nodes
  • the worker nodes

Talos enforces a high level of security by using mutual TLS for authentication and authorization.

We recommend that the configuration of Talos be performed by a cluster owner. A cluster owner should be a person of authority within an organization, perhaps a director, manager, or senior member of a team. They are responsible for storing the root CA, and distributing the PKI for authorized cluster administrators.

Talos runs great out of the box, but tweaking some minor settings will make your life a lot easier in the future. This is not a requirement; rather, this document explains some key settings.

Endpoint

To configure the talosctl endpoint, it is recommended you use a resolvable DNS name. This way, if you decide to upgrade to a multi-controlplane cluster, you only have to add the IP address to the hostname configuration. The configuration can either be done on a load balancer, or simply through DNS.

For example:

This is in the config file for the cluster, e.g. init.yaml, controlplane.yaml, and join.yaml. For more details, please see the v1alpha1 endpoint configuration.

.....
cluster:
  controlPlane:
    endpoint: https://endpoint.example.local:6443
.....

If you have a DNS name as the endpoint, you can upgrade your Talos cluster to multiple controlplanes in the future (if you don’t have a multi-controlplane setup from the start). Using a DNS name generates the corresponding certificates (Kubernetes and Talos) for the correct hostname.

7.9 - Customizing the Kernel

FROM scratch AS customization
COPY --from=<custom kernel image> /lib/modules /lib/modules

FROM docker.io/andrewrynhard/installer:latest
COPY --from=<custom kernel image> /boot/vmlinuz /usr/install/vmlinuz

Build the custom installer image:

docker build --build-arg RM="/lib/modules" -t talos-installer .

Note: You can use the --squash flag to create smaller images.

Now that we have a custom installer we can build Talos for the specific platform we wish to deploy to.

7.10 - Customizing the Root Filesystem

The installer image contains ONBUILD instructions that handle the following:

  • the decompression, and unpacking of the initramfs.xz
  • the unsquashing of the rootfs
  • the copying of new rootfs files
  • the squashing of the new rootfs
  • and the packing, and compression of the new initramfs.xz

When used as a base image, the installer will perform the above steps automatically with the requirement that a customization stage be defined in the Dockerfile.

For example, say we have an image that contains the contents of a library we wish to add to the Talos rootfs. We need to define a stage with the name customization:

FROM scratch AS customization
COPY --from=<name|index> <src> <dest>

Using a multi-stage Dockerfile we can define the customization stage and build FROM the installer image:

FROM scratch AS customization
COPY --from=<name|index> <src> <dest>

FROM ghcr.io/talos-systems/installer:latest

When building the image, the customization stage will automatically be copied into the rootfs. The customization stage is not limited to a single COPY instruction. In fact, you can do whatever you would like in this stage, but keep in mind that everything in / will be copied into the rootfs.

Note: <dest> is the path relative to the rootfs that you wish to place the contents of <src>.

To build the image, run:

docker build --squash -t <organization>/installer:latest .

In the case that you need to perform some cleanup before adding additional files to the rootfs, you can specify the RM build-time variable:

docker build --squash --build-arg RM="[<path> ...]" -t <organization>/installer:latest .

This will perform a rm -rf on the specified paths relative to the rootfs.

Note: RM must be a whitespace delimited list.

The resulting image can be used to:

  • generate an image for any of the supported providers
  • perform bare-metal installs
  • perform upgrades

We will step through common customizations in the remainder of this section.

7.11 - Managing PKI

Generating an Administrator Key Pair

In order to create a key pair, you will need the root CA.

Save the CA public key, and CA private key as ca.crt, and ca.key respectively. Now, run the following commands to generate a certificate:

talosctl gen key --name admin
talosctl gen csr --key admin.key --ip 127.0.0.1
talosctl gen crt --ca ca --csr admin.csr --name admin

Now, base64 encode admin.crt, and admin.key:

cat admin.crt | base64
cat admin.key | base64

You can now set the crt and key fields in the talosconfig to the base64 encoded strings.
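
As a sketch, the relevant part of a talosconfig looks roughly like this; the context name and endpoint are placeholders, and the base64 values are elided.

context: admin@mycluster
contexts:
  admin@mycluster:
    endpoints:
      - <control plane IP or DNS name>
    ca: <base64 of ca.crt>
    crt: <base64 of admin.crt>
    key: <base64 of admin.key>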

Renewing an Expired Administrator Certificate

In order to renew the certificate, you will need the root CA and the admin private key. The base64-encoded key can be found in any one of the control plane nodes’ configuration files. Where it is exactly will depend on the specific version of the configuration file you are using.

Save the CA public key, CA private key, and admin private key as ca.crt, ca.key, and admin.key respectively. Now, run the following commands to generate a certificate:

talosctl gen csr --key admin.key --ip 127.0.0.1
talosctl gen crt --ca ca --csr admin.csr --name admin

You should see admin.crt in your current directory. Now, base64 encode admin.crt:

cat admin.crt | base64

You can now set the certificate in the talosconfig to the base64 encoded string.

7.12 - Resetting a Machine

From time to time, it may be beneficial to reset a Talos machine to its “original” state. Bear in mind that this is a destructive action for the given machine. Doing this removes the machine from Kubernetes and Etcd (if applicable), and clears any data on the machine that would normally persist across a reboot.

The API command for doing this is talosctl reset. There are a couple of flags as part of this command:

Flags:
      --graceful   if true, attempt to cordon/drain node and leave etcd (if applicable) (default true)
      --reboot     if true, reboot the node after resetting instead of shutting down

The graceful flag is especially important when considering HA vs. non-HA Talos clusters. If the machine is part of an HA cluster, a normal, graceful reset should work just fine right out of the box as long as the cluster is in a good state. However, if this is a single node cluster being used for testing purposes, a graceful reset is not an option since Etcd cannot be “left” if there is only a single member. In this case, reset should be used with --graceful=false to skip performing checks that would normally block the reset.
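
For example, on a single-node test cluster you might run something like the following; the node address is illustrative.

talosctl reset --nodes 10.20.30.40 --graceful=false --reboot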

7.13 - Storage

Talos is known to work with Rook and NFS.

Rook

We recommend at least Rook v1.5.

NFS

The NFS client is part of the kubelet image maintained by the Talos team. This means that the version installed in your running kubelet is the version of NFS supported by Talos.

7.14 - Upgrading Kubernetes

Video Walkthrough

To see a live demo of this writeup, see the video below:

Kubeconfig

In order to edit the control plane, we will need a working kubectl config. If you don’t already have one, you can get one by running:

talosctl --nodes <master node> kubeconfig

Automated Kubernetes Upgrade

To upgrade from Kubernetes v1.19.4 to v1.20.1 run:

$ talosctl --nodes <master node> upgrade-k8s --from 1.19.4 --to 1.20.1
patched kube-apiserver secrets for "service-account.key"
updating pod-checkpointer grace period to "0m"
sleeping 5m0s to let the pod-checkpointer self-checkpoint be updated
temporarily taking "kube-apiserver" out of pod-checkpointer control
updating daemonset "kube-apiserver" to version "1.20.1"
updating daemonset "kube-controller-manager" to version "1.20.1"
updating daemonset "kube-scheduler" to version "1.20.1"
updating daemonset "kube-proxy" to version "1.20.1"
updating pod-checkpointer grace period to "5m0s"

Manual Kubernetes Upgrade

Kubernetes can be upgraded manually as well by following the steps outlined below. They are equivalent to the steps performed by the talosctl upgrade-k8s command.

Patching kube-apiserver Secrets

Copy the secret value service-account.key from the secret kube-controller-manager in the kube-system namespace to the secret kube-apiserver.
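
One possible way to do this with kubectl is sketched below; it assumes you paste the printed value into the secret by hand.

# Print the base64-encoded service-account.key value from the kube-controller-manager secret
kubectl -n kube-system get secret kube-controller-manager -o jsonpath='{.data.service-account\.key}'
# Then add it under data."service-account.key" in the kube-apiserver secret
kubectl -n kube-system edit secret kube-apiserver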

After these changes, the kube-apiserver secret should contain the following entries:

Data
====
service-account.key:
apiserver.key:
ca.crt:
front-proxy-client.crt:
apiserver-kubelet-client.crt:
encryptionconfig.yaml:
etcd-client.crt:
front-proxy-client.key:
service-account.pub:
apiserver.crt:
auditpolicy.yaml:
etcd-client.key:
apiserver-kubelet-client.key:
front-proxy-ca.crt:
etcd-client-ca.crt:

pod-checkpointer

Talos runs the pod-checkpointer component, which helps recover control plane components (specifically, the API server) if the control plane is not healthy.

However, the way checkpoints interact with the API server upgrade may make an upgrade take a lot longer due to a race condition on the API server listen port.

In order to speed up upgrades, first lower pod-checkpointer grace period to zero (kubectl -n kube-system edit daemonset pod-checkpointer), change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: pod-checkpointer
        command:
        ...
        - --checkpoint-grace-period=5m0s

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: pod-checkpointer
        command:
        ...
        - --checkpoint-grace-period=0s

Wait for 5 minutes to let pod-checkpointer update self-checkpoint to the new grace period.

API Server

In the API server’s DaemonSet, change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-apiserver
          image: k8s.gcr.io/kube-apiserver:v1.19.4
          command:
            - /go-runner
            - /usr/local/bin/kube-apiserver
      tolerations:
        - ...

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-apiserver
          image: k8s.gcr.io/kube-apiserver:v1.20.1
          command:
            - /go-runner
            - /usr/local/bin/kube-apiserver
            - ...
            - --api-audiences=<control plane endpoint>
            - --service-account-issuer=<control plane endpoint>
            - --service-account-signing-key-file=/etc/kubernetes/secrets/service-account.key
      tolerations:
        - ...
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

Summary of the changes:

  • update image version
  • add new toleration
  • add three new flags (replace <control plane endpoint> with the actual endpoint of the cluster, e.g. https://10.5.0.1:6443)

To edit the DaemonSet, run:

kubectl edit daemonsets -n kube-system kube-apiserver

Controller Manager

In the controller manager’s DaemonSet, change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-controller-manager
          image: k8s.gcr.io/kube-controller-manager:v1.19.4
      tolerations:
        - ...

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-controller-manager
          image: k8s.gcr.io/kube-controller-manager:v1.20.1
      tolerations:
        - ...
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

To edit the DaemonSet, run:

kubectl edit daemonsets -n kube-system kube-controller-manager

Scheduler

In the scheduler’s DaemonSet, change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-scheduler
          image: k8s.gcr.io/kube-scheduler:v1.19.4
      tolerations:
        - ...

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-scheduler
          image: k8s.gcr.io/kube-scheduler:v1.20.1
      tolerations:
        - ...
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

To edit the DaemonSet, run:

kubectl edit daemonsets -n kube-system kube-scheduler

Proxy

In the proxy’s DaemonSet, change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-proxy
          image: k8s.gcr.io/kube-proxy:v1.19.4
      tolerations:
        - ...

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: kube-proxy
          image: k8s.gcr.io/kube-proxy:v1.20.1
      tolerations:
        - ...
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

To edit the DaemonSet, run:

kubectl edit daemonsets -n kube-system kube-proxy

Restoring pod-checkpointer

Restore the grace period of 5 minutes and add the new toleration. In the pod-checkpointer DaemonSet (kubectl -n kube-system edit daemonset pod-checkpointer), change:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: pod-checkpointer
        command:
        ...
        - --checkpoint-grace-period=0s
      tolerations:
        - ...

to:

kind: DaemonSet
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: pod-checkpointer
        command:
        ...
        - --checkpoint-grace-period=5m0s
      tolerations:
        - ...
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

Kubelet

The Talos team now maintains an image for the kubelet that should be used starting with Kubernetes 1.20. The image for this release is ghcr.io/talos-systems/kubelet:v1.20.1. To explicitly set the image, add it to the kubelet section of the machine configuration. For example:

machine:
  ...
  kubelet:
    image: ghcr.io/talos-systems/kubelet:v1.20.1
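
To roll this change out to a running node, apply the updated machine configuration with talosctl apply-config; the node IP and file name below are only illustrative:

talosctl --nodes 10.20.30.40 apply-config --file controlplane.yaml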

7.15 - Upgrading Talos

Talos upgrades are effected by an API call. The talosctl CLI utility facilitates this, or you can use the automatic upgrade features provided by the Talos Controller Manager.

talosctl Upgrade

To manually upgrade a Talos node, you will specify the node’s IP address and the installer container image for the version of Talos to which you wish to upgrade.

For instance, if your Talos node has the IP address 10.20.30.40 and you want to install the official version v0.8.0, you would enter a command such as:

  $ talosctl upgrade --nodes 10.20.30.40 \
      --image ghcr.io/talos-systems/installer:v0.8.0

This command has one notable option: --preserve, which can be used to explicitly tell Talos whether or not to keep its ephemeral data intact. In most cases, it is correct to let Talos perform its default action. However, if you are running a single-node control plane, you will want to make sure that --preserve=true.
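
For example, on a single-node control plane:

  $ talosctl upgrade --nodes 10.20.30.40 \
      --image ghcr.io/talos-systems/installer:v0.8.0 \
      --preserve=true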

Talos Controller Manager

The Talos Controller Manager can coordinate upgrades of your nodes automatically. It ensures that a controllable number of nodes are being upgraded at any given time. It also applies an upgrade flow that allows you to classify some machines as early adopters and others as receiving only stable, tested versions.

To find out more about the controller manager and to get it installed and configured, take a look at the GitHub page. Please note that the controller manager is still in fairly early development. More advanced features, such as time slot scheduling, will be coming in the future.

Changelog and Upgrade Notes

In an effort to create more production-ready clusters, Talos will now taint control plane nodes as unschedulable. This means that any application you have deployed must tolerate this taint if you intend to run the application on control plane nodes.
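
For example, a Deployment that must run on control plane nodes would need a toleration along the following lines. The taint key shown here is an assumption; confirm the actual key on your nodes with kubectl describe node (it may be node-role.kubernetes.io/master or node-role.kubernetes.io/control-plane depending on the version):

spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule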

Another feature you will notice is the automatic uncordoning of nodes that have been upgraded. Talos will now uncordon a node if the cordon was initiated by the upgrade process.

Talosctl

The talosctl CLI now requires an explicit set of nodes. This can be configured with talosctl config node or set on the fly with talosctl --nodes.
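
For example (the IP is illustrative):

talosctl config node 10.20.30.40
talosctl --nodes 10.20.30.40 version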

8 - Reference

8.1 - API

Talos gRPC API reference.

common/common.proto

Data

FieldTypeLabelDescription
metadataMetadata
bytesbytes

DataResponse

FieldTypeLabelDescription
messagesDatarepeated

Empty

FieldTypeLabelDescription
metadataMetadata

EmptyResponse

FieldTypeLabelDescription
messagesEmptyrepeated

Error

FieldTypeLabelDescription
codeCode
messagestring
detailsgoogle.protobuf.Anyrepeated

Metadata

Common metadata message nested in all reply message types

FieldTypeLabelDescription
hostnamestringhostname of the server response comes from (injected by proxy)
errorstringerror is set if request failed to the upstream (rest of response is undefined)
statusgoogle.rpc.Statuserror as gRPC Status

Code

NameNumberDescription
FATAL0
LOCKED1

ContainerDriver

NameNumberDescription
CONTAINERD0
CRI1

health/health.proto

HealthCheck

FieldTypeLabelDescription
statusHealthCheck.ServingStatus

HealthCheckResponse

FieldTypeLabelDescription
messagesHealthCheckrepeated

HealthWatchRequest

FieldTypeLabelDescription
interval_secondsint64

ReadyCheck

FieldTypeLabelDescription
statusReadyCheck.ReadyStatus

ReadyCheckResponse

FieldTypeLabelDescription
messagesReadyCheckrepeated

HealthCheck.ServingStatus

NameNumberDescription
UNKNOWN0
SERVING1
NOT_SERVING2

ReadyCheck.ReadyStatus

NameNumberDescription
UNKNOWN0
READY1
NOT_READY2

Health

Method NameRequest TypeResponse TypeDescription
Check.google.protobuf.EmptyHealthCheckResponse
WatchHealthWatchRequestHealthCheckResponse stream
Ready.google.protobuf.EmptyReadyCheckResponse

machine/machine.proto

ApplyConfiguration

ApplyConfigurationResponse describes the response to a configuration request.

FieldTypeLabelDescription
metadatacommon.Metadata

ApplyConfigurationRequest

rpc applyConfiguration ApplyConfiguration describes a request to assert a new configuration upon a node.

FieldTypeLabelDescription
databytes
no_rebootbool

ApplyConfigurationResponse

FieldTypeLabelDescription
messagesApplyConfigurationrepeated

Bootstrap

The bootstrap message containing the bootstrap status.

FieldTypeLabelDescription
metadatacommon.Metadata

BootstrapRequest

rpc bootstrap

BootstrapResponse

FieldTypeLabelDescription
messagesBootstraprepeated

CNIConfig

FieldTypeLabelDescription
namestring
urlsstringrepeated

CPUInfo

FieldTypeLabelDescription
processoruint32
vendor_idstring
cpu_familystring
modelstring
model_namestring
steppingstring
microcodestring
cpu_mhzdouble
cache_sizestring
physical_idstring
siblingsuint32
core_idstring
cpu_coresuint32
apic_idstring
initial_apic_idstring
fpustring
fpu_exceptionstring
cpu_id_leveluint32
wpstring
flagsstringrepeated
bugsstringrepeated
bogo_mipsdouble
cl_flush_sizeuint32
cache_alignmentuint32
address_sizesstring
power_managementstring

CPUInfoResponse

FieldTypeLabelDescription
messagesCPUsInforepeated

CPUStat

FieldTypeLabelDescription
userdouble
nicedouble
systemdouble
idledouble
iowaitdouble
irqdouble
soft_irqdouble
stealdouble
guestdouble
guest_nicedouble

CPUsInfo

FieldTypeLabelDescription
metadatacommon.Metadata
cpu_infoCPUInforepeated

ClusterConfig

FieldTypeLabelDescription
namestring
control_planeControlPlaneConfig
cluster_networkClusterNetworkConfig
allow_scheduling_on_mastersbool

ClusterNetworkConfig

FieldTypeLabelDescription
dns_domainstring
cni_configCNIConfig

Container

The messages message containing the requested containers.

FieldTypeLabelDescription
metadatacommon.Metadata
containersContainerInforepeated

ContainerInfo

The messages message containing the requested containers.

FieldTypeLabelDescription
namespacestring
idstring
imagestring
piduint32
statusstring
pod_idstring
namestring

ContainersRequest

FieldTypeLabelDescription
namespacestring
drivercommon.ContainerDriverdriver might be default “containerd” or “cri”

ContainersResponse

FieldTypeLabelDescription
messagesContainerrepeated

ControlPlaneConfig

FieldTypeLabelDescription
endpointstring

CopyRequest

CopyRequest describes a request to copy data out of Talos node

Copy produces .tar.gz archive which is streamed back to the caller

FieldTypeLabelDescription
root_pathstringRoot path to start copying data out, it might be either a file or directory

DHCPOptionsConfig

FieldTypeLabelDescription
route_metricuint32

DiskStat

FieldTypeLabelDescription
namestring
read_completeduint64
read_mergeduint64
read_sectorsuint64
read_time_msuint64
write_completeduint64
write_mergeduint64
write_sectorsuint64
write_time_msuint64
io_in_progressuint64
io_time_msuint64
io_time_weighted_msuint64
discard_completeduint64
discard_mergeduint64
discard_sectorsuint64
discard_time_msuint64

DiskStats

FieldTypeLabelDescription
metadatacommon.Metadata
totalDiskStat
devicesDiskStatrepeated

DiskStatsResponse

FieldTypeLabelDescription
messagesDiskStatsrepeated

DiskUsageInfo

DiskUsageInfo describes a file or directory’s information for du command

FieldTypeLabelDescription
metadatacommon.Metadata
namestringName is the name (including prefixed path) of the file or directory
sizeint64Size indicates the number of bytes contained within the file
errorstringError describes any error encountered while trying to read the file information.
relative_namestringRelativeName is the name of the file or directory relative to the RootPath

DiskUsageRequest

DiskUsageRequest describes a request to list disk usage of directories and regular files

FieldTypeLabelDescription
recursion_depthint32RecursionDepth indicates how many levels of subdirectories should be recursed. The default (0) indicates that no limit should be enforced.
allboolAll write sizes for all files, not just directories.
thresholdint64Threshold exclude entries smaller than SIZE if positive, or entries greater than SIZE if negative.
pathsstringrepeatedDiskUsagePaths is the list of directories to calculate disk usage for.

DmesgRequest

dmesg

FieldTypeLabelDescription
followbool
tailbool

EtcdForfeitLeadership

FieldTypeLabelDescription
metadatacommon.Metadata
memberstring

EtcdForfeitLeadershipRequest

EtcdForfeitLeadershipResponse

FieldTypeLabelDescription
messagesEtcdForfeitLeadershiprepeated

EtcdLeaveCluster

FieldTypeLabelDescription
metadatacommon.Metadata

EtcdLeaveClusterRequest

EtcdLeaveClusterResponse

FieldTypeLabelDescription
messagesEtcdLeaveClusterrepeated

EtcdMemberList

FieldTypeLabelDescription
metadatacommon.Metadata
membersstringrepeated

EtcdMemberListRequest

FieldTypeLabelDescription
query_localbool

EtcdMemberListResponse

FieldTypeLabelDescription
messagesEtcdMemberListrepeated

Event

FieldTypeLabelDescription
metadatacommon.Metadata
datagoogle.protobuf.Any
idstring

EventsRequest

FieldTypeLabelDescription
tail_eventsint32
tail_idstring
tail_secondsint32

FileInfo

FileInfo describes a file or directory’s information

FieldTypeLabelDescription
metadatacommon.Metadata
namestringName is the name (including prefixed path) of the file or directory
sizeint64Size indicates the number of bytes contained within the file
modeuint32Mode is the bitmap of UNIX mode/permission flags of the file
modifiedint64Modified indicates the UNIX timestamp at which the file was last modified (TODO: unix timestamp or include proto’s Date type)
is_dirboolIsDir indicates that the file is a directory
errorstringError describes any error encountered while trying to read the file information.
linkstringLink is filled with symlink target
relative_namestringRelativeName is the name of the file or directory relative to the RootPath

GenerateConfigurationRequest

GenerateConfigurationRequest describes a request to generate a new configuration on a node.

FieldTypeLabelDescription
config_versionstring
cluster_configClusterConfig
machine_configMachineConfig
override_timegoogle.protobuf.Timestamp

GenerateConfigurationResponse

GenerateConfiguration describes the response to a generate configuration request.

FieldTypeLabelDescription
metadatacommon.Metadata
databytesrepeated
talosconfigbytes

Hostname

FieldTypeLabelDescription
metadatacommon.Metadata
hostnamestring

HostnameResponse

FieldTypeLabelDescription
messagesHostnamerepeated

InstallConfig

FieldTypeLabelDescription
install_diskstring
install_imagestring

ListRequest

ListRequest describes a request to list the contents of a directory.

FieldTypeLabelDescription
rootstringRoot indicates the root directory for the list. If not indicated, ‘/’ is presumed.
recurseboolRecurse indicates that subdirectories should be recursed.
recursion_depthint32RecursionDepth indicates how many levels of subdirectories should be recursed. The default (0) indicates that no limit should be enforced.
typesListRequest.TyperepeatedTypes indicates what file type should be returned. If not indicated, all files will be returned.

LoadAvg

FieldTypeLabelDescription
metadatacommon.Metadata
load1double
load5double
load15double

LoadAvgResponse

FieldTypeLabelDescription
messagesLoadAvgrepeated

LogsRequest

rpc logs The request message containing the process name.

FieldTypeLabelDescription
namespacestring
idstring
drivercommon.ContainerDriverdriver might be default “containerd” or “cri”
followbool
tail_linesint32

MachineConfig

FieldTypeLabelDescription
typeMachineConfig.MachineType
install_configInstallConfig
network_configNetworkConfig
kubernetes_versionstring

MemInfo

FieldTypeLabelDescription
memtotaluint64
memfreeuint64
memavailableuint64
buffersuint64
cacheduint64
swapcacheduint64
activeuint64
inactiveuint64
activeanonuint64
inactiveanonuint64
activefileuint64
inactivefileuint64
unevictableuint64
mlockeduint64
swaptotaluint64
swapfreeuint64
dirtyuint64
writebackuint64
anonpagesuint64
mappeduint64
shmemuint64
slabuint64
sreclaimableuint64
sunreclaimuint64
kernelstackuint64
pagetablesuint64
nfsunstableuint64
bounceuint64
writebacktmpuint64
commitlimituint64
committedasuint64
vmalloctotaluint64
vmallocuseduint64
vmallocchunkuint64
hardwarecorrupteduint64
anonhugepagesuint64
shmemhugepagesuint64
shmempmdmappeduint64
cmatotaluint64
cmafreeuint64
hugepagestotaluint64
hugepagesfreeuint64
hugepagesrsvduint64
hugepagessurpuint64
hugepagesizeuint64
directmap4kuint64
directmap2muint64
directmap1guint64

Memory

FieldTypeLabelDescription
metadatacommon.Metadata
meminfoMemInfo

MemoryResponse

FieldTypeLabelDescription
messagesMemoryrepeated

MountStat

The messages message containing the requested processes.

FieldTypeLabelDescription
filesystemstring
sizeuint64
availableuint64
mounted_onstring

Mounts

The messages message containing the requested df stats.

FieldTypeLabelDescription
metadatacommon.Metadata
statsMountStatrepeated

MountsResponse

FieldTypeLabelDescription
messagesMountsrepeated

NetDev

FieldTypeLabelDescription
namestring
rx_bytesuint64
rx_packetsuint64
rx_errorsuint64
rx_droppeduint64
rx_fifouint64
rx_frameuint64
rx_compresseduint64
rx_multicastuint64
tx_bytesuint64
tx_packetsuint64
tx_errorsuint64
tx_droppeduint64
tx_fifouint64
tx_collisionsuint64
tx_carrieruint64
tx_compresseduint64

NetworkConfig

FieldTypeLabelDescription
hostnamestring
interfacesNetworkDeviceConfigrepeated

NetworkDeviceConfig

FieldTypeLabelDescription
interfacestring
cidrstring
mtuint32
dhcpbool
ignorebool
dhcp_optionsDHCPOptionsConfig
routesRouteConfigrepeated

NetworkDeviceStats

FieldTypeLabelDescription
metadatacommon.Metadata
totalNetDev
devicesNetDevrepeated

NetworkDeviceStatsResponse

FieldTypeLabelDescription
messagesNetworkDeviceStatsrepeated

PhaseEvent

FieldTypeLabelDescription
phasestring
actionPhaseEvent.Action

PlatformInfo

FieldTypeLabelDescription
namestring
modestring

Process

FieldTypeLabelDescription
metadatacommon.Metadata
processesProcessInforepeated

ProcessInfo

FieldTypeLabelDescription
pidint32
ppidint32
statestring
threadsint32
cpu_timedouble
virtual_memoryuint64
resident_memoryuint64
commandstring
executablestring
argsstring

ProcessesRequest

rpc processes

ProcessesResponse

FieldTypeLabelDescription
messagesProcessrepeated

ReadRequest

FieldTypeLabelDescription
pathstring

Reboot

rpc reboot The reboot message containing the reboot status.

FieldTypeLabelDescription
metadatacommon.Metadata

RebootResponse

FieldTypeLabelDescription
messagesRebootrepeated

Recover

The recover message containing the recover status.

FieldTypeLabelDescription
metadatacommon.Metadata

RecoverRequest

FieldTypeLabelDescription
sourceRecoverRequest.Source

RecoverResponse

FieldTypeLabelDescription
messagesRecoverrepeated

Reset

The reset message containing the restart status.

FieldTypeLabelDescription
metadatacommon.Metadata

ResetPartitionSpec

rpc reset

FieldTypeLabelDescription
labelstring
wipebool

ResetRequest

FieldTypeLabelDescription
gracefulboolGraceful indicates whether node should leave etcd before the upgrade, it also enforces etcd checks before leaving.
rebootboolReboot indicates whether node should reboot or halt after resetting.
system_partitions_to_wipeResetPartitionSpecrepeatedSystem_partitions_to_wipe lists specific system disk partitions to be reset (wiped). If system_partitions_to_wipe is empty, all the partitions are erased.

ResetResponse

FieldTypeLabelDescription
messagesResetrepeated

Restart

FieldTypeLabelDescription
metadatacommon.Metadata

RestartRequest

rpc restart The request message containing the process to restart.

FieldTypeLabelDescription
namespacestring
idstring
drivercommon.ContainerDriverdriver might be default “containerd” or “cri”

RestartResponse

The messages message containing the restart status.

FieldTypeLabelDescription
messagesRestartrepeated

Rollback

FieldTypeLabelDescription
metadatacommon.Metadata

RollbackRequest

rpc rollback

RollbackResponse

FieldTypeLabelDescription
messagesRollbackrepeated

RouteConfig

FieldTypeLabelDescription
networkstring
gatewaystring
metricuint32

SequenceEvent

rpc events

FieldTypeLabelDescription
sequencestring
actionSequenceEvent.Action
errorcommon.Error

ServiceEvent

FieldTypeLabelDescription
msgstring
statestring
tsgoogle.protobuf.Timestamp

ServiceEvents

FieldTypeLabelDescription
eventsServiceEventrepeated

ServiceHealth

FieldTypeLabelDescription
unknownbool
healthybool
last_messagestring
last_changegoogle.protobuf.Timestamp

ServiceInfo

FieldTypeLabelDescription
idstring
statestring
eventsServiceEvents
healthServiceHealth

ServiceList

rpc servicelist

FieldTypeLabelDescription
metadatacommon.Metadata
servicesServiceInforepeated

ServiceListResponse

FieldTypeLabelDescription
messagesServiceListrepeated

ServiceRestart

FieldTypeLabelDescription
metadatacommon.Metadata
respstring

ServiceRestartRequest

FieldTypeLabelDescription
idstring

ServiceRestartResponse

FieldTypeLabelDescription
messagesServiceRestartrepeated

ServiceStart

FieldTypeLabelDescription
metadatacommon.Metadata
respstring

ServiceStartRequest

rpc servicestart

FieldTypeLabelDescription
idstring

ServiceStartResponse

FieldTypeLabelDescription
messagesServiceStartrepeated

ServiceStateEvent

FieldTypeLabelDescription
servicestring
actionServiceStateEvent.Action
messagestring

ServiceStop

FieldTypeLabelDescription
metadatacommon.Metadata
respstring

ServiceStopRequest

FieldTypeLabelDescription
idstring

ServiceStopResponse

FieldTypeLabelDescription
messagesServiceStoprepeated

Shutdown

rpc shutdown The messages message containing the shutdown status.

FieldTypeLabelDescription
metadatacommon.Metadata

ShutdownResponse

FieldTypeLabelDescription
messagesShutdownrepeated

SoftIRQStat

FieldTypeLabelDescription
hiuint64
timeruint64
net_txuint64
net_rxuint64
blockuint64
block_io_polluint64
taskletuint64
scheduint64
hrtimeruint64
rcuuint64

StartRequest

FieldTypeLabelDescription
idstring

StartResponse

FieldTypeLabelDescription
respstring

Stat

The messages message containing the requested stat.

FieldTypeLabelDescription
namespacestring
idstring
memory_usageuint64
cpu_usageuint64
pod_idstring
namestring

Stats

The messages message containing the requested stats.

FieldTypeLabelDescription
metadatacommon.Metadata
statsStatrepeated

StatsRequest

The request message containing the containerd namespace.

FieldTypeLabelDescription
namespacestring
drivercommon.ContainerDriverdriver might be default “containerd” or “cri”

StatsResponse

FieldTypeLabelDescription
messagesStatsrepeated

StopRequest

FieldTypeLabelDescription
idstring

StopResponse

FieldTypeLabelDescription
respstring

SystemStat

FieldTypeLabelDescription
metadatacommon.Metadata
boot_timeuint64
cpu_totalCPUStat
cpuCPUStatrepeated
irq_totaluint64
irquint64repeated
context_switchesuint64
process_createduint64
process_runninguint64
process_blockeduint64
soft_irq_totaluint64
soft_irqSoftIRQStat

SystemStatResponse

FieldTypeLabelDescription
messagesSystemStatrepeated

TaskEvent

FieldTypeLabelDescription
taskstring
actionTaskEvent.Action

Upgrade

FieldTypeLabelDescription
metadatacommon.Metadata
ackstring

UpgradeRequest

rpc upgrade

FieldTypeLabelDescription
imagestring
preservebool
stagebool

UpgradeResponse

FieldTypeLabelDescription
messagesUpgraderepeated

Version

FieldTypeLabelDescription
metadatacommon.Metadata
versionVersionInfo
platformPlatformInfo

VersionInfo

FieldTypeLabelDescription
tagstring
shastring
builtstring
go_versionstring
osstring
archstring

VersionResponse

FieldTypeLabelDescription
messagesVersionrepeated

ListRequest.Type

File type.

NameNumberDescription
REGULAR0Regular file (not directory, symlink, etc).
DIRECTORY1Directory.
SYMLINK2Symbolic link.

MachineConfig.MachineType

NameNumberDescription
TYPE_UNKNOWN0
TYPE_INIT1
TYPE_CONTROL_PLANE2
TYPE_JOIN3

PhaseEvent.Action

NameNumberDescription
START0
STOP1

RecoverRequest.Source

NameNumberDescription
ETCD0
APISERVER1

SequenceEvent.Action

NameNumberDescription
NOOP0
START1
STOP2

ServiceStateEvent.Action

NameNumberDescription
INITIALIZED0
PREPARING1
WAITING2
RUNNING3
STOPPING4
FINISHED5
FAILED6
SKIPPED7

TaskEvent.Action

NameNumberDescription
START0
STOP1

MachineService

The machine service definition.

Method NameRequest TypeResponse TypeDescription
ApplyConfigurationApplyConfigurationRequestApplyConfigurationResponse
BootstrapBootstrapRequestBootstrapResponse
ContainersContainersRequestContainersResponse
CopyCopyRequest.common.Data stream
CPUInfo.google.protobuf.EmptyCPUInfoResponse
DiskStats.google.protobuf.EmptyDiskStatsResponse
DmesgDmesgRequest.common.Data stream
EventsEventsRequestEvent stream
EtcdMemberListEtcdMemberListRequestEtcdMemberListResponse
EtcdLeaveClusterEtcdLeaveClusterRequestEtcdLeaveClusterResponse
EtcdForfeitLeadershipEtcdForfeitLeadershipRequestEtcdForfeitLeadershipResponse
GenerateConfigurationGenerateConfigurationRequestGenerateConfigurationResponse
Hostname.google.protobuf.EmptyHostnameResponse
Kubeconfig.google.protobuf.Empty.common.Data stream
ListListRequestFileInfo stream
DiskUsageDiskUsageRequestDiskUsageInfo stream
LoadAvg.google.protobuf.EmptyLoadAvgResponse
LogsLogsRequest.common.Data stream
Memory.google.protobuf.EmptyMemoryResponse
Mounts.google.protobuf.EmptyMountsResponse
NetworkDeviceStats.google.protobuf.EmptyNetworkDeviceStatsResponse
Processes.google.protobuf.EmptyProcessesResponse
ReadReadRequest.common.Data stream
Reboot.google.protobuf.EmptyRebootResponse
RestartRestartRequestRestartResponse
RollbackRollbackRequestRollbackResponse
ResetResetRequestResetResponse
RecoverRecoverRequestRecoverResponse
ServiceList.google.protobuf.EmptyServiceListResponse
ServiceRestartServiceRestartRequestServiceRestartResponse
ServiceStartServiceStartRequestServiceStartResponse
ServiceStopServiceStopRequestServiceStopResponse
Shutdown.google.protobuf.EmptyShutdownResponse
StatsStatsRequestStatsResponse
SystemStat.google.protobuf.EmptySystemStatResponse
UpgradeUpgradeRequestUpgradeResponse
Version.google.protobuf.EmptyVersionResponse

network/network.proto

Interface

Interface represents a net.Interface

FieldTypeLabelDescription
indexuint32
mtuuint32
namestring
hardwareaddrstring
flagsInterfaceFlags
ipaddressstringrepeated

Interfaces

FieldTypeLabelDescription
metadatacommon.Metadata
interfacesInterfacerepeated

InterfacesResponse

FieldTypeLabelDescription
messagesInterfacesrepeated

Route

The messages message containing a route.

FieldTypeLabelDescription
interfacestringInterface is the interface over which traffic to this destination should be sent
destinationstringDestination is the network prefix CIDR which this route provides
gatewaystringGateway is the gateway address to which traffic to this destination should be sent
metricuint32Metric is the priority of the route, where lower metrics have higher priorities
scopeuint32Scope describes the scope of this route
sourcestringSource is the source prefix CIDR for the route, if one is defined
familyAddressFamilyFamily is the address family of the route. Currently, the only options are AF_INET (IPV4) and AF_INET6 (IPV6).
protocolRouteProtocolProtocol is the protocol by which this route came to be in place
flagsuint32Flags indicate any special flags on the route

Routes

FieldTypeLabelDescription
metadatacommon.Metadata
routesRouterepeated

RoutesResponse

The messages message containing the routes.

FieldTypeLabelDescription
messagesRoutesrepeated

AddressFamily

NameNumberDescription
AF_UNSPEC0
AF_INET2
IPV42
AF_INET610
IPV610

InterfaceFlags

NameNumberDescription
FLAG_UNKNOWN0
FLAG_UP1
FLAG_BROADCAST2
FLAG_LOOPBACK3
FLAG_POINT_TO_POINT4
FLAG_MULTICAST5

RouteProtocol

NameNumberDescription
RTPROT_UNSPEC0
RTPROT_REDIRECT1Route installed by ICMP redirects
RTPROT_KERNEL2Route installed by kernel
RTPROT_BOOT3Route installed during boot
RTPROT_STATIC4Route installed by administrator
RTPROT_GATED8Route installed by gated
RTPROT_RA9Route installed by router advertisement
RTPROT_MRT10Route installed by Merit MRT
RTPROT_ZEBRA11Route installed by Zebra/Quagga
RTPROT_BIRD12Route installed by Bird
RTPROT_DNROUTED13Route installed by DECnet routing daemon
RTPROT_XORP14Route installed by XORP
RTPROT_NTK15Route installed by Netsukuku
RTPROT_DHCP16Route installed by DHCP
RTPROT_MROUTED17Route installed by Multicast daemon
RTPROT_BABEL42Route installed by Babel daemon

NetworkService

The network service definition.

Method NameRequest TypeResponse TypeDescription
Routes.google.protobuf.EmptyRoutesResponse
Interfaces.google.protobuf.EmptyInterfacesResponse

security/security.proto

CertificateRequest

The request message containing the process name.

FieldTypeLabelDescription
csrbytes

CertificateResponse

The response message containing the requested logs.

FieldTypeLabelDescription
cabytes
crtbytes

ReadFileRequest

The request message for reading a file on disk.

FieldTypeLabelDescription
pathstring

ReadFileResponse

The response message for reading a file on disk.

FieldTypeLabelDescription
databytes

WriteFileRequest

The request message containing the process name.

FieldTypeLabelDescription
pathstring
databytes
permint32

WriteFileResponse

The response message containing the requested logs.

SecurityService

The security service definition.

Method NameRequest TypeResponse TypeDescription
CertificateCertificateRequestCertificateResponse
ReadFileReadFileRequestReadFileResponse
WriteFileWriteFileRequestWriteFileResponse

storage/storage.proto

Disk

Disk represents a disk.

FieldTypeLabelDescription
sizeuint64Size indicates the disk size in bytes.
modelstringModel indicates the disk model.
device_namestringDeviceName indicates the disk name (e.g. sda).

DisksResponse

DisksResponse represents the response of the Disks RPC.

FieldTypeLabelDescription
metadatacommon.Metadata
disksDiskrepeated

StorageService

StorageService represents the storage service.

Method NameRequest TypeResponse TypeDescription
Disks.google.protobuf.EmptyDisksResponse

time/time.proto

Time

FieldTypeLabelDescription
metadatacommon.Metadata
serverstring
localtimegoogle.protobuf.Timestamp
remotetimegoogle.protobuf.Timestamp

TimeRequest

The response message containing the ntp server

FieldTypeLabelDescription
serverstring

TimeResponse

The response message containing the ntp server, time, and offset

FieldTypeLabelDescription
messagesTimerepeated

TimeService

The time service definition.

Method NameRequest TypeResponse TypeDescription
Time.google.protobuf.EmptyTimeResponse
TimeCheckTimeRequestTimeResponse

Scalar Value Types

.proto TypeNotesC++JavaPythonGoC#PHPRuby
doubledoubledoublefloatfloat64doublefloatFloat
floatfloatfloatfloatfloat32floatfloatFloat
int32Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead.int32intintint32intintegerBignum or Fixnum (as required)
int64Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead.int64longint/longint64longinteger/stringBignum
uint32Uses variable-length encoding.uint32intint/longuint32uintintegerBignum or Fixnum (as required)
uint64Uses variable-length encoding.uint64longint/longuint64ulonginteger/stringBignum or Fixnum (as required)
sint32Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s.int32intintint32intintegerBignum or Fixnum (as required)
sint64Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s.int64longint/longint64longinteger/stringBignum
fixed32Always four bytes. More efficient than uint32 if values are often greater than 2^28.uint32intintuint32uintintegerBignum or Fixnum (as required)
fixed64Always eight bytes. More efficient than uint64 if values are often greater than 2^56.uint64longint/longuint64ulonginteger/stringBignum
sfixed32Always four bytes.int32intintint32intintegerBignum or Fixnum (as required)
sfixed64Always eight bytes.int64longint/longint64longinteger/stringBignum
boolboolbooleanbooleanboolboolbooleanTrueClass/FalseClass
stringA string must always contain UTF-8 encoded or 7-bit ASCII text.stringStringstr/unicodestringstringstringString (UTF-8)
bytesMay contain any arbitrary sequence of bytes.stringByteStringstr[]byteByteStringstringString (ASCII-8BIT)

8.2 - CLI

talosctl apply-config

Apply a new configuration to a node

talosctl apply-config [flags]
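
For example, to push a freshly generated config to a node booted in maintenance mode (the node IP and file name are illustrative):

talosctl apply-config --insecure --nodes 10.20.30.40 --file init.yaml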

Options

      --cert-fingerprint strings   list of server certificate fingerprints to accept (defaults to no check)
  -f, --file string                the filename of the updated configuration
  -h, --help                       help for apply-config
  -i, --insecure                   apply the config using the insecure (encrypted with no auth) maintenance service
      --interactive                apply the config using text based interactive mode
      --no-reboot                  apply the config only after the reboot

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl bootstrap

Bootstrap the cluster

talosctl bootstrap [flags]

Options

  -h, --help   help for bootstrap

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl cluster create

Creates a local docker-based or QEMU-based kubernetes cluster

talosctl cluster create [flags]

Options

      --arch string                             cluster architecture (default "amd64")
      --cidr string                             CIDR of the cluster network (default "10.5.0.0/24")
      --cni-bin-path strings                    search path for CNI binaries (VM only) (default [/home/user/.talos/cni/bin])
      --cni-bundle-url string                   URL to download CNI bundle from (VM only) (default "https://github.com/siderolabs/talos/releases/download/v0.8.0-alpha.3/talosctl-cni-bundle-${ARCH}.tar.gz")
      --cni-cache-dir string                    CNI cache directory path (VM only) (default "/home/user/.talos/cni/cache")
      --cni-conf-dir string                     CNI config directory path (VM only) (default "/home/user/.talos/cni/conf.d")
      --cpus string                             the share of CPUs as fraction (each container/VM) (default "2.0")
      --crashdump                               print debug crashdump to stderr when cluster startup fails
      --custom-cni-url string                   install custom CNI from the URL (Talos cluster)
      --disk int                                default limit on disk size in MB (each VM) (default 6144)
      --disk-image-path string                  disk image to use
      --dns-domain string                       the dns domain to use for cluster (default "cluster.local")
      --docker-host-ip string                   Host IP to forward exposed ports to (Docker provisioner only) (default "0.0.0.0")
      --endpoint string                         use endpoint instead of provider defaults
  -p, --exposed-ports string                    Comma-separated list of ports/protocols to expose on init node. Ex -p <hostPort>:<containerPort>/<protocol (tcp or udp)> (Docker provisioner only)
  -h, --help                                    help for create
      --image string                            the image to use (default "ghcr.io/talos-systems/talos:latest")
      --init-node-as-endpoint                   use init node as endpoint instead of any load balancer endpoint
      --initrd-path string                      the uncompressed kernel image to use (default "_out/initramfs-${ARCH}.xz")
  -i, --input-dir string                        location of pre-generated config files
      --install-image string                    the installer image to use (default "ghcr.io/talos-systems/installer:latest")
      --iso-path string                         the ISO path to use for the initial boot (VM only)
      --kubernetes-version string               desired kubernetes version to run (default "1.20.1")
      --masters int                             the number of masters to create (default 1)
      --memory int                              the limit on memory usage in MB (each container/VM) (default 2048)
      --mtu int                                 MTU of the cluster network (default 1500)
      --nameservers strings                     list of nameservers to use (default [8.8.8.8,1.1.1.1])
      --registry-insecure-skip-verify strings   list of registry hostnames to skip TLS verification for
      --registry-mirror strings                 list of registry mirrors to use in format: <registry host>=<mirror URL>
      --skip-injecting-config                   skip injecting config from embedded metadata server, write config files to current directory
      --skip-kubeconfig                         skip merging kubeconfig from the created cluster
      --user-disk strings                       list of disks to create for each VM in format: <mount_point1>:<size1>:<mount_point2>:<size2>
      --vmlinuz-path string                     the compressed kernel image to use (default "_out/vmlinuz-${ARCH}")
      --wait                                    wait for the cluster to be ready before returning (default true)
      --wait-timeout duration                   timeout to wait for the cluster to be ready (default 20m0s)
      --with-apply-config                       enable apply config when the VM is starting in maintenance mode
      --with-bootloader                         enable bootloader to load kernel and initramfs from disk image after install (default true)
      --with-debug                              enable debug in Talos config to send service logs to the console
      --with-init-node                          create the cluster with an init node
      --with-uefi                               enable UEFI on x86_64 architecture (always enabled for arm64)
      --workers int                             the number of workers to create (default 1)

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
      --name string          the name of the cluster (default "talos-default")
  -n, --nodes strings        target the specified nodes
      --provisioner string   Talos cluster provisioner to use (default "docker")
      --state string         directory path to store cluster state (default "/home/user/.talos/clusters")
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl cluster - A collection of commands for managing local docker-based or firecracker-based clusters

talosctl cluster destroy

Destroys a local docker-based or firecracker-based kubernetes cluster

talosctl cluster destroy [flags]

Options

  -h, --help   help for destroy

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
      --name string          the name of the cluster (default "talos-default")
  -n, --nodes strings        target the specified nodes
      --provisioner string   Talos cluster provisioner to use (default "docker")
      --state string         directory path to store cluster state (default "/home/user/.talos/clusters")
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl cluster - A collection of commands for managing local docker-based or firecracker-based clusters

talosctl cluster show

Shows info about a local provisioned kubernetes cluster

talosctl cluster show [flags]

Options

  -h, --help   help for show

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
      --name string          the name of the cluster (default "talos-default")
  -n, --nodes strings        target the specified nodes
      --provisioner string   Talos cluster provisioner to use (default "docker")
      --state string         directory path to store cluster state (default "/home/user/.talos/clusters")
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl cluster - A collection of commands for managing local docker-based or firecracker-based clusters

talosctl cluster

A collection of commands for managing local docker-based or firecracker-based clusters

Options

  -h, --help                 help for cluster
      --name string          the name of the cluster (default "talos-default")
      --provisioner string   Talos cluster provisioner to use (default "docker")
      --state string         directory path to store cluster state (default "/home/user/.talos/clusters")

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl completion

Output shell completion code for the specified shell (bash or zsh)

Synopsis

Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of talosctl commands. This can be done by sourcing it from the .bash_profile.

Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2

talosctl completion SHELL [flags]

Examples

# Installing bash completion on macOS using homebrew
## If running Bash 3.2 included with macOS
	brew install bash-completion
## or, if running Bash 4.1+
	brew install bash-completion@2
## If talosctl is installed via homebrew, this should start working immediately.
## If you've installed via other means, you may need add the completion to your completion directory
	talosctl completion bash > $(brew --prefix)/etc/bash_completion.d/talosctl

# Installing bash completion on Linux
## If bash-completion is not installed on Linux, please install the 'bash-completion' package
## via your distribution's package manager.
## Load the talosctl completion code for bash into the current shell
	source <(talosctl completion bash)
## Write bash completion code to a file and source it from .bash_profile
	talosctl completion bash > ~/.talos/completion.bash.inc
	printf "
		# talosctl shell completion
		source '$HOME/.talos/completion.bash.inc'
		" >> $HOME/.bash_profile
	source $HOME/.bash_profile
# Load the talosctl completion code for zsh[1] into the current shell
	source <(talosctl completion zsh)
# Set the talosctl completion code for zsh[1] to autoload on startup
talosctl completion zsh > "${fpath[1]}/_osctl"

Options

  -h, --help   help for completion

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl config add

Add a new context

talosctl config add <context> [flags]

Options

      --ca string    the path to the CA certificate
      --crt string   the path to the certificate
  -h, --help         help for add
      --key string   the path to the key

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config context

Set the current context

talosctl config context <context> [flags]

Options

  -h, --help   help for context

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config contexts

List contexts defined in Talos config

talosctl config contexts [flags]

Options

  -h, --help   help for contexts

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config endpoint

Set the endpoint(s) for the current context

talosctl config endpoint <endpoint>... [flags]

Options

  -h, --help   help for endpoint

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config merge

Merge additional contexts from another Talos config into the default config

Synopsis

Contexts with the same name are renamed while merging configs.

talosctl config merge <from> [flags]

Options

  -h, --help   help for merge

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config node

Set the node(s) for the current context

talosctl config node <endpoint>... [flags]

Options

  -h, --help   help for node

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl config

Manage the client configuration

Options

  -h, --help   help for config

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl containers

List containers

talosctl containers [flags]

Options

  -h, --help         help for containers
  -k, --kubernetes   use the k8s.io containerd namespace
  -c, --use-cri      use the CRI driver

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl copy

Copy data out from the node

Synopsis

Creates a .tar.gz archive at the node starting at <src-path> and streams it back to the client.

If ‘-’ is given for <local-path>, the archive is written to stdout. Otherwise the archive is extracted to <local-path>, which should be an empty directory, or talosctl creates the directory if <local-path> doesn’t exist. The command doesn’t preserve ownership and access mode for the files in extract mode, while the streamed .tar archive captures ownership and permission bits.

talosctl copy <src-path> -|<local-path> [flags]
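
For example, to copy /var/log from a node into a local directory (the node IP and paths are illustrative):

talosctl --nodes 10.20.30.40 copy /var/log ./node-logs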

Options

  -h, --help   help for copy

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl crashdump

Dump debug information about the cluster

talosctl crashdump [flags]

Options

      --control-plane-nodes strings   specify IPs of control plane nodes
  -h, --help                          help for crashdump
      --init-node string              specify IPs of init node
      --worker-nodes strings          specify IPs of worker nodes

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl dashboard

Cluster dashboard with real-time metrics

Synopsis

Provide quick UI to navigate through node real-time metrics.

Keyboard shortcuts:

  • h, <Left>: switch one node to the left
  • l, <Right>: switch one node to the right
  • j, <Down>: scroll process list down
  • k, <Up>: scroll process list up
  • <C-d>: scroll process list half page down
  • <C-u>: scroll process list half page up
  • <C-f>: scroll process list one page down
  • <C-b>: scroll process list one page up

talosctl dashboard [flags]

Options

  -h, --help                       help for dashboard
  -d, --update-interval duration   interval between updates (default 3s)

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl dmesg

Retrieve kernel logs

talosctl dmesg [flags]

Options

  -f, --follow   specify if the kernel log should be streamed
  -h, --help     help for dmesg
      --tail     specify if only new messages should be sent (makes sense only when combined with --follow)

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl etcd forfeit-leadership

Tell node to forfeit etcd cluster leadership

talosctl etcd forfeit-leadership [flags]

Options

  -h, --help   help for forfeit-leadership

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl etcd leave

Tell nodes to leave etcd cluster

talosctl etcd leave [flags]

Options

  -h, --help   help for leave

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl etcd members

Get the list of etcd cluster members

talosctl etcd members [flags]

Options

  -h, --help   help for members

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl etcd

Manage etcd

Options

  -h, --help   help for etcd

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl events

Stream runtime events

talosctl events [flags]

Options

      --duration duration   show events for the past duration interval (one second resolution, default is to show no history)
  -h, --help                help for events
      --since string        show events after the specified event ID (default is to show no history)
      --tail int32          show specified number of past events (use -1 to show full history, default is to show no history)

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl gen ca

Generates a self-signed X.509 certificate authority

talosctl gen ca [flags]

Options

  -h, --help                  help for ca
      --hours int             the hours from now on which the certificate validity period ends (default 87600)
      --organization string   X.509 distinguished name for the Organization
      --rsa                   generate in RSA format

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen config

Generates a set of configuration files for Talos cluster

Synopsis

The cluster endpoint is the URL for the Kubernetes API. If you decide to use a control plane node, common in a single node control plane setup, use port 6443 as this is the port that the API server binds to on every control plane node. For an HA setup, usually involving a load balancer, use the IP and port of the load balancer.

talosctl gen config <cluster name> <cluster endpoint> [flags]
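
For example, for a cluster whose control plane endpoint is a load balancer at 10.5.0.1:

talosctl gen config my-cluster https://10.5.0.1:6443 --output-dir _out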

Options

      --additional-sans strings     additional Subject-Alt-Names for the APIServer certificate
      --arch string                 the architecture of the cluster (default "amd64")
      --dns-domain string           the dns domain to use for cluster (default "cluster.local")
  -h, --help                        help for config
      --install-disk string         the disk to install to (default "/dev/sda")
      --install-image string        the image used to perform an installation (default "ghcr.io/talos-systems/installer:latest")
      --kubernetes-version string   desired kubernetes version to run
  -o, --output-dir string           destination to output generated files
  -p, --persist                     the desired persist value for configs (default true)
      --registry-mirror strings     list of registry mirrors to use in format: <registry host>=<mirror URL>
      --version string              the desired machine config version to generate (default "v1alpha1")

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen crt

Generates an X.509 Ed25519 certificate

talosctl gen crt [flags]

Options

      --ca string     path to the PEM encoded CERTIFICATE
      --csr string    path to the PEM encoded CERTIFICATE REQUEST
  -h, --help          help for crt
      --hours int     the hours from now on which the certificate validity period ends (default 24)
      --name string   the basename of the generated file

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen csr

Generates a CSR using an Ed25519 private key

talosctl gen csr [flags]

Options

  -h, --help         help for csr
      --ip string    generate the certificate for this IP address
      --key string   path to the PEM encoded EC or RSA PRIVATE KEY

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen key

Generates an Ed25519 private key

talosctl gen key [flags]

Options

  -h, --help          help for key
      --name string   the basename of the generated file

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen keypair

Generates an X.509 Ed25519 key pair

talosctl gen keypair [flags]

Options

  -h, --help                  help for keypair
      --ip string             generate the certificate for this IP address
      --organization string   X.509 distinguished name for the Organization

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl gen

Generate CAs, certificates, and private keys

Options

  -h, --help   help for gen
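
As a rough sketch of how these subcommands chain together (the organization, names, IP, and resulting file names below are assumptions; adjust them to the files actually produced on disk):

talosctl gen ca --organization Example
talosctl gen key --name admin
talosctl gen csr --key admin.key --ip 10.0.0.1
talosctl gen crt --ca Example.crt --csr admin.csr --name admin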

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

talosctl health

Check cluster health

talosctl health [flags]

Options

      --control-plane-nodes strings   specify IPs of control plane nodes
  -h, --help                          help for health
      --init-node string              specify the IP of the init node
      --k8s-endpoint string           use endpoint instead of kubeconfig default
      --run-e2e                       run Kubernetes e2e test
      --server                        run server-side check (default true)
      --wait-timeout duration         timeout to wait for the cluster to be ready (default 20m0s)
      --worker-nodes strings          specify IPs of worker nodes
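
For example (the node IPs below are only illustrative):

talosctl -n 10.5.0.2 health --control-plane-nodes 10.5.0.2 --worker-nodes 10.5.0.3 --wait-timeout 10m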

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl images

List the default images used by Talos

talosctl images [flags]

Options

  -h, --help   help for images

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl interfaces

List network interfaces

talosctl interfaces [flags]

Options

  -h, --help   help for interfaces

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl kubeconfig

Download the admin kubeconfig from the node

Synopsis

Download the admin kubeconfig from the node. If merge flag is defined, config will be merged with ~/.kube/config or [local-path] if specified. Otherwise kubeconfig will be written to PWD or [local-path] if specified.

talosctl kubeconfig [local-path] [flags]

Options

  -f, --force                       Force overwrite of kubeconfig if already present, force overwrite on kubeconfig merge
      --force-context-name string   Force context name for kubeconfig merge
  -h, --help                        help for kubeconfig
  -m, --merge                       Merge with existing kubeconfig (default true)
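
For example (the node IP below is only illustrative), the first command merges the kubeconfig into ~/.kube/config, while the second writes it to a local file without merging:

talosctl -n 10.5.0.2 kubeconfig
talosctl -n 10.5.0.2 kubeconfig ./kubeconfig --merge=false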

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl list

Retrieve a directory listing

talosctl list [path] [flags]

Options

  -d, --depth int32    maximum recursion depth
  -h, --help           help for list
  -H, --humanize       humanize size and time in the output
  -l, --long           display additional file details
  -r, --recurse        recurse into subdirectories
  -t, --type strings   filter by specified types:
                       f	regular file
                       d	directory
                       l, L	symbolic link
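
For example (the node IP below is only illustrative), to list /var two levels deep with file details and human-readable sizes:

talosctl -n 10.5.0.2 list -l -H -r -d 2 /var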

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl logs

Retrieve logs for a service

talosctl logs <service name> [flags]

Options

  -f, --follow       specify if the logs should be streamed
  -h, --help         help for logs
  -k, --kubernetes   use the k8s.io containerd namespace
      --tail int32   lines of log file to display (default is to show from the beginning) (default -1)
  -c, --use-cri      use the CRI driver
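
For example (the node IP below is only illustrative), to stream the kubelet logs, or to show the last 50 lines of the etcd service log:

talosctl -n 10.5.0.2 logs -f kubelet
talosctl -n 10.5.0.2 logs --tail 50 etcd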

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl memory

Show memory usage

talosctl memory [flags]

Options

  -h, --help      help for memory
  -v, --verbose   display extended memory statistics

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl mounts

List mounts

talosctl mounts [flags]

Options

  -h, --help   help for mounts

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl processes

List running processes

talosctl processes [flags]

Options

  -h, --help          help for processes
  -s, --sort string   Column to sort output by. [rss|cpu] (default "rss")
  -w, --watch         Stream running processes

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl read

Read a file on the machine

talosctl read <path> [flags]

Options

  -h, --help   help for read

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl reboot

Reboot a node

talosctl reboot [flags]

Options

  -h, --help   help for reboot

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl recover

Recover a control plane

talosctl recover [flags]

Options

  -h, --help            help for recover
  -s, --source string   The data source for restoring the control plane manifests from (valid options are "apiserver" and "etcd") (default "apiserver")

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl reset

Reset a node

talosctl reset [flags]

Options

      --graceful                        if true, attempt to cordon/drain node and leave etcd (if applicable) (default true)
  -h, --help                            help for reset
      --reboot                          if true, reboot the node after resetting instead of shutting down
      --system-labels-to-wipe strings   if set, just wipe selected system disk partitions by label but keep other partitions intact
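
For example (the node IP below is only illustrative), to reset a node non-gracefully and reboot it afterwards instead of shutting it down:

talosctl -n 10.5.0.2 reset --graceful=false --reboot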

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl restart

Restart a process

talosctl restart <id> [flags]

Options

  -h, --help         help for restart
  -k, --kubernetes   use the k8s.io containerd namespace
  -c, --use-cri      use the CRI driver

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl rollback

Rollback a node to the previous installation

talosctl rollback [flags]

Options

  -h, --help   help for rollback

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl routes

List network routes

talosctl routes [flags]

Options

  -h, --help   help for routes

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl service

Retrieve the state of a service (or all services), control service state

Synopsis

Service control command. If run without arguments, it lists all the services and their state. If a service ID is specified, the default action ‘status’ is executed, which shows the status of that single service. With the actions ‘start’, ‘stop’, and ‘restart’, the service state is updated accordingly.

talosctl service [<id> [start|stop|restart|status]] [flags]

Options

  -h, --help   help for service
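
For example (the node IP below is only illustrative):

talosctl -n 10.5.0.2 service
talosctl -n 10.5.0.2 service etcd status
talosctl -n 10.5.0.2 service kubelet restart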

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl shutdown

Shutdown a node

talosctl shutdown [flags]

Options

  -h, --help   help for shutdown

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl stats

Get container stats

talosctl stats [flags]

Options

  -h, --help         help for stats
  -k, --kubernetes   use the k8s.io containerd namespace
  -c, --use-cri      use the CRI driver

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl time

Gets current server time

talosctl time [--check server] [flags]

Options

  -c, --check string   checks server time against specified ntp server
  -h, --help           help for time
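
For example (the node IP below is only illustrative), to check a node’s time against an NTP server:

talosctl -n 10.5.0.2 time --check time.cloudflare.com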

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl upgrade

Upgrade Talos on the target node

talosctl upgrade [flags]

Options

  -h, --help           help for upgrade
  -i, --image string   the container image to use for performing the install
  -p, --preserve       preserve data
  -s, --stage          stage the upgrade to perform it after a reboot
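
For example (the node IP and image tag below are only illustrative), to upgrade a node with a specific installer image while preserving data:

talosctl -n 10.5.0.2 upgrade --image ghcr.io/talos-systems/installer:v0.8.1 --preserve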

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl upgrade-k8s

Upgrade Kubernetes control plane in the Talos cluster.

Synopsis

This command upgrades the Kubernetes control plane components between the specified versions. The pod-checkpointer is handled in a special way to speed up kube-apiserver upgrades.

talosctl upgrade-k8s [flags]

Options

      --arch string       the cluster architecture (default "amd64")
      --endpoint string   the cluster control plane endpoint
      --from string       the Kubernetes control plane version to upgrade from
  -h, --help              help for upgrade-k8s
      --to string         the Kubernetes control plane version to upgrade to (default "1.20.1")
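
For example (the node IP, endpoint, and versions below are only illustrative):

talosctl -n 10.5.0.2 upgrade-k8s --endpoint https://192.168.2.10:6443 --from 1.19.6 --to 1.20.1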

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl usage

Retrieve disk usage

talosctl usage [path1] [path2] ... [pathN] [flags]

Options

  -a, --all             write counts for all files, not just directories
  -d, --depth int32     maximum recursion depth
  -h, --help            help for usage
  -H, --humanize        humanize size and time in the output
  -t, --threshold int   threshold exclude entries smaller than SIZE if positive, or entries greater than SIZE if negative

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl validate

Validate config

talosctl validate [flags]

Options

  -c, --config string   the path of the config file
  -h, --help            help for validate
  -m, --mode string     the mode to validate the config for (valid values are metal, cloud, and container)
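
For example (the file name below assumes output from talosctl gen config):

talosctl validate --config controlplane.yaml --mode metal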

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl version

Prints the version

talosctl version [flags]

Options

      --client   Print client version only
  -h, --help     help for version
      --short    Print the short version

Options inherited from parent commands

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

  • talosctl - A CLI for out-of-band management of Kubernetes nodes created by Talos

talosctl

A CLI for out-of-band management of Kubernetes nodes created by Talos

Options

      --context string       Context to be used in command
  -e, --endpoints strings    override default endpoints in Talos configuration
  -h, --help                 help for talosctl
  -n, --nodes strings        target the specified nodes
      --talosconfig string   The path to the Talos configuration file (default "/home/user/.talos/config")

SEE ALSO

8.3 - Configuration

Package v1alpha1 configuration file contains all the options available for configuring a machine.

To generate a set of basic configuration files, run:

talosctl gen config --version v1alpha1 <cluster name> <cluster endpoint>

This will generate a machine config for each node type, and a talosconfig for the CLI.

Config

Config defines the v1alpha1 configuration file.

version: v1alpha1
persist: true
machine: # ...
cluster: # ...

version string

Indicates the schema used to decode the contents.

Valid values:

  • v1alpha1

debug bool

Enable verbose logging to the console. All system container logs will flow into the serial console.

Note: To avoid breaking the Talos bootstrap flow, enable this option only if the serial console can handle high message throughput.

Valid values:

  • true

  • yes

  • false

  • no


persist bool

Indicates whether to pull the machine config upon every boot.

Valid values:

  • true

  • yes

  • false

  • no


machine MachineConfig

Provides machine specific configuration options.


cluster ClusterConfig

Provides cluster specific configuration options.


MachineConfig

MachineConfig represents the machine-specific config values.

Appears in:

type: controlplane
# InstallConfig represents the installation options for preparing a node.
install:
    disk: /dev/sda # The disk used for installations.
    # Allows for supplying extra kernel args via the bootloader.
    extraKernelArgs:
        - console=ttyS1
        - panic=10
    image: ghcr.io/talos-systems/installer:latest # Allows for supplying the image used to perform the installation.
    bootloader: true # Indicates if a bootloader should be installed.
    wipe: false # Indicates if the installation disk should be wiped at installation time.

type string

Defines the role of the machine within the cluster.

Init

Init node type designates the first control plane node to come up. You can think of it like a bootstrap node. This node will perform the initial steps to bootstrap the cluster – generation of TLS assets, starting of the control plane, etc.

Control Plane

Control Plane node type designates the node as a control plane member. This means it will host etcd along with the Kubernetes master components such as API Server, Controller Manager, Scheduler.

Worker

Worker node type designates the node as a worker node. This means it will be an available compute node for scheduling workloads.

Valid values:

  • init

  • controlplane

  • join


token string

The token is used by a machine to join the PKI of the cluster. Using this token, a machine will create a certificate signing request (CSR) and request a certificate that will be used as its identity.

Warning: It is important to ensure that this token is correct since a machine’s certificate has a short TTL by default.

Examples:

token: 328hom.uqjzh6jnn2eie9oi

ca PEMEncodedCertificateAndKey

The root certificate authority of the PKI. It is composed of a base64 encoded crt and key.

Examples:

ca:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

certSANs []string

Extra certificate subject alternative names for the machine’s certificate. By default, all non-loopback interface IPs are automatically added to the certificate’s SANs.

Examples:

certSANs:
    - 10.0.0.10
    - 172.16.0.10
    - 192.168.0.10

kubelet KubeletConfig

Used to provide additional options to the kubelet.

Examples:

kubelet:
    image: ghcr.io/talos-systems/kubelet:v1.20.1 # The `image` field is an optional reference to an alternative kubelet image.
    # The `extraArgs` field is used to provide additional flags to the kubelet.
    extraArgs:
        feature-gates: ServerSideApply=true

    # # The `extraMounts` field is used to add additional mounts to the kubelet container.
    # extraMounts:
    #     - destination: /var/lib/example
    #       type: bind
    #       source: /var/lib/example
    #       options:
    #         - rshared
    #         - rw

network NetworkConfig

Provides machine specific network configuration options.

Examples:

network:
    hostname: worker-1 # Used to statically set the hostname for the machine.
    # `interfaces` is used to define the network interface configuration.
    interfaces:
        - interface: eth0 # The interface name.
          cidr: 192.168.2.0/24 # Assigns a static IP address to the interface.
          # A list of routes associated with the interface.
          routes:
            - network: 0.0.0.0/0 # The route's network.
              gateway: 192.168.2.1 # The route's gateway.
              metric: 1024 # The optional metric for the route.
          mtu: 1500 # The interface's MTU.

          # # Bond specific options.
          # bond:
          #     # The interfaces that make up the bond.
          #     interfaces:
          #         - eth0
          #         - eth1
          #     mode: 802.3ad # A bond option.
          #     lacpRate: fast # A bond option.

          # # Indicates if DHCP should be used to configure the interface.
          # dhcp: true

          # # DHCP specific options.
          # dhcpOptions:
          #     routeMetric: 1024 # The priority of all routes received via DHCP.
    # Used to statically set the nameservers for the machine.
    nameservers:
        - 9.8.7.6
        - 8.7.6.5

    # # Allows for extra entries to be added to the `/etc/hosts` file
    # extraHostEntries:
    #     - ip: 192.168.1.100 # The IP of the host.
    #       # The host alias.
    #       aliases:
    #         - example
    #         - example.domain.tld

disks []MachineDisk

Used to partition, format, and mount additional disks. Since the rootfs is read-only with the exception of /var, mounts are only valid if they are under /var. Note that partitioning and formatting are done only once, and only if no existing partitions are found. If size: is omitted, the partition is sized to occupy the full disk.

Note: size is in units of bytes.

Examples:

disks:
    - device: /dev/sdb # The name of the disk to use.
      # A list of partitions to create on the disk.
      partitions:
        - mountpoint: /var/mnt/extra # Where to mount the partition.

          # # The size of partition: either bytes or human readable representation. If `size:` is omitted, the partition is sized to occupy the full disk.

          # # Human readable representation.
          # size: 100 MB
          # # Precise value in bytes.
          # size: 1073741824

install InstallConfig

Used to provide instructions for installations.

Examples:

install:
    disk: /dev/sda # The disk used for installations.
    # Allows for supplying extra kernel args via the bootloader.
    extraKernelArgs:
        - console=ttyS1
        - panic=10
    image: ghcr.io/talos-systems/installer:latest # Allows for supplying the image used to perform the installation.
    bootloader: true # Indicates if a bootloader should be installed.
    wipe: false # Indicates if the installation disk should be wiped at installation time.

files []MachineFile

Allows the addition of user specified files. The value of op can be create, overwrite, or append. In the case of create, path must not exist. In the case of overwrite and append, path must be a valid file. If an op value of append is used, the new content is appended to the existing file. Note that the file contents are not required to be base64 encoded.

Note: The specified path is relative to /var.

Examples:

files:
    - content: '...' # The contents of the file.
      permissions: 0o666 # The file's permissions in octal.
      path: /tmp/file.txt # The path of the file.
      op: append # The operation to use

env Env

The env field allows for the addition of environment variables. All environment variables are set on PID 1 in addition to every service.

Valid values:

  • GRPC_GO_LOG_VERBOSITY_LEVEL

  • GRPC_GO_LOG_SEVERITY_LEVEL

  • http_proxy

  • https_proxy

  • no_proxy

Examples:

env:
    GRPC_GO_LOG_SEVERITY_LEVEL: info
    GRPC_GO_LOG_VERBOSITY_LEVEL: "99"
    https_proxy: http://SERVER:PORT/
env:
    GRPC_GO_LOG_SEVERITY_LEVEL: error
    https_proxy: https://USERNAME:PASSWORD@SERVER:PORT/
env:
    https_proxy: http://DOMAIN\USERNAME:PASSWORD@SERVER:PORT/

time TimeConfig

Used to configure the machine’s time settings.

Examples:

time:
    disabled: false # Indicates if the time service is disabled for the machine.
    # Specifies time (NTP) servers to use for setting the system time.
    servers:
        - time.cloudflare.com

sysctls map[string]string

Used to configure the machine’s sysctls.

Examples:

sysctls:
    kernel.domainname: talos.dev
    net.ipv4.ip_forward: "0"

registries RegistriesConfig

Used to configure the machine’s container image registry mirrors.

Automatically generates matching CRI configuration for registry mirrors.

The mirrors section allows redirecting requests for images to a non-default registry, which might be a local registry or a caching mirror.

The config section provides a way to authenticate to the registry with a TLS client identity, provide the registry CA, or supply authentication information. Authentication information has the same meaning as the corresponding field in .docker/config.json.

See also matching configuration for CRI containerd plugin.

Examples:

registries:
    # Specifies mirror configuration for each registry.
    mirrors:
        docker.io:
            # List of endpoints (URLs) for registry mirrors to use.
            endpoints:
                - https://registry.local
    # Specifies TLS & auth configuration for HTTPS image registries.
    config:
        registry.local:
            # The TLS configuration for the registry.
            tls:
                # Enable mutual TLS authentication with the registry.
                clientIdentity:
                    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
                    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
            # The auth configuration for this registry.
            auth:
                username: username # Optional registry authentication.
                password: password # Optional registry authentication.

ClusterConfig

ClusterConfig represents the cluster-wide config values.

Appears in:

# ControlPlaneConfig represents the control plane configuration options.
controlPlane:
    endpoint: https://1.2.3.4 # Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname.
    localAPIServerPort: 443 # The port that the API server listens on internally.
clusterName: talos.local
# ClusterNetworkConfig represents kube networking configuration options.
network:
    # The CNI used.
    cni:
        name: flannel # Name of CNI to use.
    dnsDomain: cluster.local # The domain used by Kubernetes DNS.
    # The pod subnet CIDR.
    podSubnets:
        - 10.244.0.0/16
    # The service subnet CIDR.
    serviceSubnets:
        - 10.96.0.0/12

controlPlane ControlPlaneConfig

Provides control plane specific configuration options.

Examples:

controlPlane:
    endpoint: https://1.2.3.4 # Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname.
    localAPIServerPort: 443 # The port that the API server listens on internally.

clusterName string

Configures the cluster’s name.


network ClusterNetworkConfig

Provides cluster specific network configuration options.

Examples:

network:
    # The CNI used.
    cni:
        name: flannel # Name of CNI to use.
    dnsDomain: cluster.local # The domain used by Kubernetes DNS.
    # The pod subnet CIDR.
    podSubnets:
        - 10.244.0.0/16
    # The service subnet CIDR.
    serviceSubnets:
        - 10.96.0.0/12

token string

The bootstrap token used to join the cluster.

Examples:

token: wlzjyw.bei2zfylhs2by0wd

aescbcEncryptionSecret string

The key used for the encryption of secret data at rest.

Examples:

aescbcEncryptionSecret: z01mye6j16bspJYtTB/5SFX8j7Ph4JXxM2Xuu4vsBPM=

ca PEMEncodedCertificateAndKey

The base64 encoded root certificate authority used by Kubernetes.

Examples:

ca:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

apiServer APIServerConfig

API server specific configuration options.

Examples:

apiServer:
    image: k8s.gcr.io/kube-apiserver-amd64:v1.20.1 # The container image used in the API server manifest.
    # Extra arguments to supply to the API server.
    extraArgs:
        feature-gates: ServerSideApply=true
        http2-max-streams-per-connection: "32"
    # Extra certificate subject alternative names for the API server's certificate.
    certSANs:
        - 1.2.3.4
        - 4.5.6.7

controllerManager ControllerManagerConfig

Controller manager server specific configuration options.

Examples:

controllerManager:
    image: k8s.gcr.io/kube-controller-manager-amd64:v1.20.1 # The container image used in the controller manager manifest.
    # Extra arguments to supply to the controller manager.
    extraArgs:
        feature-gates: ServerSideApply=true

proxy ProxyConfig

Kube-proxy server specific configuration options.

Examples:

proxy:
    image: k8s.gcr.io/kube-proxy-amd64:v1.20.1 # The container image used in the kube-proxy manifest.
    mode: ipvs # proxy mode of kube-proxy.
    # Extra arguments to supply to kube-proxy.
    extraArgs:
        proxy-mode: iptables

scheduler SchedulerConfig

Scheduler server specific configuration options.

Examples:

scheduler:
    image: k8s.gcr.io/kube-scheduler-amd64:v1.20.1 # The container image used in the scheduler manifest.
    # Extra arguments to supply to the scheduler.
    extraArgs:
        feature-gates: AllBeta=true

etcd EtcdConfig

Etcd specific configuration options.

Examples:

etcd:
    image: gcr.io/etcd-development/etcd:v3.4.14 # The container image used to create the etcd service.
    # The `ca` is the root certificate authority of the PKI.
    ca:
        crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
        key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
    # Extra arguments to supply to etcd.
    extraArgs:
        election-timeout: "5000"

podCheckpointer PodCheckpointer

Pod Checkpointer specific configuration options.

Examples:

podCheckpointer:
    image: '...' # The `image` field is an override to the default pod-checkpointer image.

coreDNS CoreDNS

Core DNS specific configuration options.

Examples:

coreDNS:
    image: k8s.gcr.io/coredns:1.7.0 # The `image` field is an override to the default coredns image.

extraManifests []string

A list of urls that point to additional manifests. These will get automatically deployed by bootkube.

Examples:

extraManifests:
    - https://www.example.com/manifest1.yaml
    - https://www.example.com/manifest2.yaml

extraManifestHeaders map[string]string

A map of key value pairs that will be added while fetching the ExtraManifests.

Examples:

extraManifestHeaders:
    Token: "1234567"
    X-ExtraInfo: info

adminKubeconfig AdminKubeconfigConfig

Settings for admin kubeconfig generation. Certificate lifetime can be configured.

Examples:

adminKubeconfig:
    certLifetime: 1h0m0s # Admin kubeconfig certificate lifetime (default is 1 year).

allowSchedulingOnMasters bool

Allows running workloads on master nodes.

Valid values:

  • true

  • yes

  • false

  • no


KubeletConfig

KubeletConfig represents the kubelet config values.

Appears in:

image: ghcr.io/talos-systems/kubelet:v1.20.1 # The `image` field is an optional reference to an alternative kubelet image.
# The `extraArgs` field is used to provide additional flags to the kubelet.
extraArgs:
    feature-gates: ServerSideApply=true

# # The `extraMounts` field is used to add additional mounts to the kubelet container.
# extraMounts:
#     - destination: /var/lib/example
#       type: bind
#       source: /var/lib/example
#       options:
#         - rshared
#         - rw

image string

The image field is an optional reference to an alternative kubelet image.

Examples:

image: ghcr.io/talos-systems/kubelet:v1.20.1

extraArgs map[string]string

The extraArgs field is used to provide additional flags to the kubelet.

Examples:

extraArgs:
    key: value

extraMounts []Mount

The extraMounts field is used to add additional mounts to the kubelet container.

Examples:

extraMounts:
    - destination: /var/lib/example
      type: bind
      source: /var/lib/example
      options:
        - rshared
        - rw

NetworkConfig

NetworkConfig represents the machine’s networking config values.

Appears in:

hostname: worker-1 # Used to statically set the hostname for the machine.
# `interfaces` is used to define the network interface configuration.
interfaces:
    - interface: eth0 # The interface name.
      cidr: 192.168.2.0/24 # Assigns a static IP address to the interface.
      # A list of routes associated with the interface.
      routes:
        - network: 0.0.0.0/0 # The route's network.
          gateway: 192.168.2.1 # The route's gateway.
          metric: 1024 # The optional metric for the route.
      mtu: 1500 # The interface's MTU.

      # # Bond specific options.
      # bond:
      #     # The interfaces that make up the bond.
      #     interfaces:
      #         - eth0
      #         - eth1
      #     mode: 802.3ad # A bond option.
      #     lacpRate: fast # A bond option.

      # # Indicates if DHCP should be used to configure the interface.
      # dhcp: true

      # # DHCP specific options.
      # dhcpOptions:
      #     routeMetric: 1024 # The priority of all routes received via DHCP.
# Used to statically set the nameservers for the machine.
nameservers:
    - 9.8.7.6
    - 8.7.6.5

# # Allows for extra entries to be added to the `/etc/hosts` file
# extraHostEntries:
#     - ip: 192.168.1.100 # The IP of the host.
#       # The host alias.
#       aliases:
#         - example
#         - example.domain.tld

hostname string

Used to statically set the hostname for the machine.


interfaces []Device

interfaces is used to define the network interface configuration. By default, all network interfaces will attempt DHCP discovery. This can be further tuned through this configuration parameter.

Examples:

interfaces:
    - interface: eth0 # The interface name.
      cidr: 192.168.2.0/24 # Assigns a static IP address to the interface.
      # A list of routes associated with the interface.
      routes:
        - network: 0.0.0.0/0 # The route's network.
          gateway: 192.168.2.1 # The route's gateway.
          metric: 1024 # The optional metric for the route.
      mtu: 1500 # The interface's MTU.

      # # Bond specific options.
      # bond:
      #     # The interfaces that make up the bond.
      #     interfaces:
      #         - eth0
      #         - eth1
      #     mode: 802.3ad # A bond option.
      #     lacpRate: fast # A bond option.

      # # Indicates if DHCP should be used to configure the interface.
      # dhcp: true

      # # DHCP specific options.
      # dhcpOptions:
      #     routeMetric: 1024 # The priority of all routes received via DHCP.

nameservers []string

Used to statically set the nameservers for the machine. Defaults to 1.1.1.1 and 8.8.8.8

Examples:

nameservers:
    - 8.8.8.8
    - 1.1.1.1

extraHostEntries []ExtraHost

Allows for extra entries to be added to the /etc/hosts file

Examples:

extraHostEntries:
    - ip: 192.168.1.100 # The IP of the host.
      # The host alias.
      aliases:
        - example
        - example.domain.tld

InstallConfig

InstallConfig represents the installation options for preparing a node.

Appears in:

disk: /dev/sda # The disk used for installations.
# Allows for supplying extra kernel args via the bootloader.
extraKernelArgs:
    - console=ttyS1
    - panic=10
image: ghcr.io/talos-systems/installer:latest # Allows for supplying the image used to perform the installation.
bootloader: true # Indicates if a bootloader should be installed.
wipe: false # Indicates if the installation disk should be wiped at installation time.

disk string

The disk used for installations.

Examples:

disk: /dev/sda
disk: /dev/nvme0

extraKernelArgs []string

Allows for supplying extra kernel args via the bootloader.

Examples:

extraKernelArgs:
    - talos.platform=metal
    - reboot=k

image string

Allows for supplying the image used to perform the installation. Image reference for each Talos release can be found on GitHub releases page.

Examples:

image: ghcr.io/talos-systems/installer:latest

bootloader bool

Indicates if a bootloader should be installed.

Valid values:

  • true

  • yes

  • false

  • no


wipe bool

Indicates if the installation disk should be wiped at installation time. Defaults to true.

Valid values:

  • true

  • yes

  • false

  • no


TimeConfig

TimeConfig represents the options for configuring time on a machine.

Appears in:

disabled: false # Indicates if the time service is disabled for the machine.
# Specifies time (NTP) servers to use for setting the system time.
servers:
    - time.cloudflare.com

disabled bool

Indicates if the time service is disabled for the machine. Defaults to false.


servers []string

Specifies time (NTP) servers to use for setting the system time. Defaults to pool.ntp.org

This parameter only supports a single time server.


RegistriesConfig

RegistriesConfig represents the image pull options.

Appears in:

# Specifies mirror configuration for each registry.
mirrors:
    docker.io:
        # List of endpoints (URLs) for registry mirrors to use.
        endpoints:
            - https://registry.local
# Specifies TLS & auth configuration for HTTPS image registries.
config:
    registry.local:
        # The TLS configuration for the registry.
        tls:
            # Enable mutual TLS authentication with the registry.
            clientIdentity:
                crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
                key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
        # The auth configuration for this registry.
        auth:
            username: username # Optional registry authentication.
            password: password # Optional registry authentication.

mirrors map[string]RegistryMirrorConfig

Specifies mirror configuration for each registry. This setting allows the use of local pull-through caching registries, air-gapped installations, etc.

The registry name is the first segment of the image identifier, with ‘docker.io’ being the default. To catch any registry names not specified explicitly, use ‘*’.

Examples:

mirrors:
    ghcr.io:
        # List of endpoints (URLs) for registry mirrors to use.
        endpoints:
            - https://registry.insecure
            - https://ghcr.io/v2/

config map[string]RegistryConfig

Specifies TLS & auth configuration for HTTPS image registries. Mutual TLS can be enabled with ‘clientIdentity’ option.

TLS configuration can be skipped if registry has trusted server certificate.

Examples:

config:
    registry.insecure:
        # The TLS configuration for the registry.
        tls:
            insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).

            # # Enable mutual TLS authentication with the registry.
            # clientIdentity:
            #     crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
            #     key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

        # # The auth configuration for this registry.
        # auth:
        #     username: username # Optional registry authentication.
        #     password: password # Optional registry authentication.

PodCheckpointer

PodCheckpointer represents the pod-checkpointer config values.

Appears in:

image: '...' # The `image` field is an override to the default pod-checkpointer image.

image string

The image field is an override to the default pod-checkpointer image.


CoreDNS

CoreDNS represents the CoreDNS config values.

Appears in:

image: k8s.gcr.io/coredns:1.7.0 # The `image` field is an override to the default coredns image.

image string

The image field is an override to the default coredns image.


Endpoint

Endpoint represents the endpoint URL parsed out of the machine config.

Appears in:

https://1.2.3.4:6443
https://cluster1.internal:6443

ControlPlaneConfig

ControlPlaneConfig represents the control plane configuration options.

Appears in:

endpoint: https://1.2.3.4 # Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname.
localAPIServerPort: 443 # The port that the API server listens on internally.

endpoint Endpoint

Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname. It is single-valued, and may optionally include a port number.

Examples:

endpoint: https://1.2.3.4:6443
endpoint: https://cluster1.internal:6443

localAPIServerPort int

The port that the API server listens on internally. This may be different than the port portion listed in the endpoint field above. The default is 6443.


APIServerConfig

APIServerConfig represents the kube apiserver configuration options.

Appears in:

image: k8s.gcr.io/kube-apiserver-amd64:v1.20.1 # The container image used in the API server manifest.
# Extra arguments to supply to the API server.
extraArgs:
    feature-gates: ServerSideApply=true
    http2-max-streams-per-connection: "32"
# Extra certificate subject alternative names for the API server's certificate.
certSANs:
    - 1.2.3.4
    - 4.5.6.7

image string

The container image used in the API server manifest.

Examples:

image: k8s.gcr.io/kube-apiserver-amd64:v1.20.1

extraArgs map[string]string

Extra arguments to supply to the API server.


certSANs []string

Extra certificate subject alternative names for the API server’s certificate.


ControllerManagerConfig

ControllerManagerConfig represents the kube controller manager configuration options.

Appears in:

image: k8s.gcr.io/kube-controller-manager-amd64:v1.20.1 # The container image used in the controller manager manifest.
# Extra arguments to supply to the controller manager.
extraArgs:
    feature-gates: ServerSideApply=true

image string

The container image used in the controller manager manifest.

Examples:

image: k8s.gcr.io/kube-controller-manager-amd64:v1.20.1

extraArgs map[string]string

Extra arguments to supply to the controller manager.


ProxyConfig

ProxyConfig represents the kube proxy configuration options.

Appears in:

image: k8s.gcr.io/kube-proxy-amd64:v1.20.1 # The container image used in the kube-proxy manifest.
mode: ipvs # proxy mode of kube-proxy.
# Extra arguments to supply to kube-proxy.
extraArgs:
    proxy-mode: iptables

image string

The container image used in the kube-proxy manifest.

Examples:

image: k8s.gcr.io/kube-proxy-amd64:v1.20.1

mode string

proxy mode of kube-proxy. The default is ‘iptables’.


extraArgs map[string]string

Extra arguments to supply to kube-proxy.


SchedulerConfig

SchedulerConfig represents the kube scheduler configuration options.

Appears in:

image: k8s.gcr.io/kube-scheduler-amd64:v1.20.1 # The container image used in the scheduler manifest.
# Extra arguments to supply to the scheduler.
extraArgs:
    feature-gates: AllBeta=true

image string

The container image used in the scheduler manifest.

Examples:

image: k8s.gcr.io/kube-scheduler-amd64:v1.20.1

extraArgs map[string]string

Extra arguments to supply to the scheduler.


EtcdConfig

EtcdConfig represents the etcd configuration options.

Appears in:

image: gcr.io/etcd-development/etcd:v3.4.14 # The container image used to create the etcd service.
# The `ca` is the root certificate authority of the PKI.
ca:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
# Extra arguments to supply to etcd.
extraArgs:
    election-timeout: "5000"

image string

The container image used to create the etcd service.

Examples:

image: gcr.io/etcd-development/etcd:v3.4.14

ca PEMEncodedCertificateAndKey

The ca is the root certificate authority of the PKI. It is composed of a base64 encoded crt and key.

Examples:

ca:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

extraArgs map[string]string

Extra arguments to supply to etcd. Note that the following args are not allowed:

  • name
  • data-dir
  • initial-cluster-state
  • listen-peer-urls
  • listen-client-urls
  • cert-file
  • key-file
  • trusted-ca-file
  • peer-client-cert-auth
  • peer-cert-file
  • peer-trusted-ca-file
  • peer-key-file

ClusterNetworkConfig

ClusterNetworkConfig represents kube networking configuration options.

Appears in:

# The CNI used.
cni:
    name: flannel # Name of CNI to use.
dnsDomain: cluster.local # The domain used by Kubernetes DNS.
# The pod subnet CIDR.
podSubnets:
    - 10.244.0.0/16
# The service subnet CIDR.
serviceSubnets:
    - 10.96.0.0/12

cni CNIConfig

The CNI used. Composed of “name” and “urls”. The “name” key only supports options of “flannel” or “custom”. “urls” is only used if name is equal to “custom”, and should point to the set of YAML files to be deployed. An empty struct or any other name will default to bootkube’s flannel.

Examples:

cni:
    name: custom # Name of CNI to use.
    # URLs containing manifests to apply for the CNI.
    urls:
        - https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/quick-install.yaml

dnsDomain string

The domain used by Kubernetes DNS. The default is cluster.local

Examples:

dnsDomain: cluster.local

podSubnets []string

The pod subnet CIDR.

Examples:

podSubnets:
    - 10.244.0.0/16

serviceSubnets []string

The service subnet CIDR.

Examples:

serviceSubnets:
    - 10.96.0.0/12

CNIConfig

CNIConfig represents the CNI configuration options.

Appears in:

name: custom # Name of CNI to use.
# URLs containing manifests to apply for the CNI.
urls:
    - https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/quick-install.yaml

name string

Name of CNI to use.


urls []string

URLs containing manifests to apply for the CNI.


AdminKubeconfigConfig

AdminKubeconfigConfig contains admin kubeconfig settings.

Appears in:

certLifetime: 1h0m0s # Admin kubeconfig certificate lifetime (default is 1 year).

certLifetime Duration

Admin kubeconfig certificate lifetime (default is 1 year). Field format accepts any Go time.Duration format (‘1h’ for one hour, ‘10m’ for ten minutes).


MachineDisk

MachineDisk represents the options available for partitioning, formatting, and mounting extra disks.

Appears in:

- device: /dev/sdb # The name of the disk to use.
  # A list of partitions to create on the disk.
  partitions:
    - mountpoint: /var/mnt/extra # Where to mount the partition.

      # # The size of partition: either bytes or human readable representation. If `size:` is omitted, the partition is sized to occupy the full disk.

      # # Human readable representation.
      # size: 100 MB
      # # Precise value in bytes.
      # size: 1073741824

device string

The name of the disk to use.


partitions []DiskPartition

A list of partitions to create on the disk.


DiskPartition

DiskPartition represents the options for a disk partition.

Appears in:


size DiskSize

The size of partition: either bytes or human readable representation. If size: is omitted, the partition is sized to occupy the full disk.

Examples:

size: 100 MB
size: 1073741824

mountpoint string

Where to mount the partition.


MachineFile

MachineFile represents a file to write to disk.

Appears in:

- content: '...' # The contents of the file.
  permissions: 0o666 # The file's permissions in octal.
  path: /tmp/file.txt # The path of the file.
  op: append # The operation to use

content string

The contents of the file.


permissions FileMode

The file’s permissions in octal.


path string

The path of the file.


op string

The operation to use

Valid values:

  • create

  • append

  • overwrite


ExtraHost

ExtraHost represents a host entry in /etc/hosts.

Appears in:

- ip: 192.168.1.100 # The IP of the host.
  # The host alias.
  aliases:
    - example
    - example.domain.tld

ip string

The IP of the host.


aliases []string

The host alias.


Device

Device represents a network interface.

Appears in:

- interface: eth0 # The interface name.
  cidr: 192.168.2.0/24 # Assigns a static IP address to the interface.
  # A list of routes associated with the interface.
  routes:
    - network: 0.0.0.0/0 # The route's network.
      gateway: 192.168.2.1 # The route's gateway.
      metric: 1024 # The optional metric for the route.
  mtu: 1500 # The interface's MTU.

  # # Bond specific options.
  # bond:
  #     # The interfaces that make up the bond.
  #     interfaces:
  #         - eth0
  #         - eth1
  #     mode: 802.3ad # A bond option.
  #     lacpRate: fast # A bond option.

  # # Indicates if DHCP should be used to configure the interface.
  # dhcp: true

  # # DHCP specific options.
  # dhcpOptions:
  #     routeMetric: 1024 # The priority of all routes received via DHCP.

interface string

The interface name.

Examples:

interface: eth0

cidr string

Assigns a static IP address to the interface. This should be in proper CIDR notation.

Note: This option is mutually exclusive with the DHCP option.

Examples:

cidr: 10.5.0.0/16

routes []Route

A list of routes associated with the interface. If used in combination with DHCP, these routes will be appended to routes returned by DHCP server.

Examples:

routes:
    - network: 0.0.0.0/0 # The route's network.
      gateway: 10.5.0.1 # The route's gateway.
    - network: 10.2.0.0/16 # The route's network.
      gateway: 10.2.0.1 # The route's gateway.

bond Bond

Bond specific options.

Examples:

bond:
    # The interfaces that make up the bond.
    interfaces:
        - eth0
        - eth1
    mode: 802.3ad # A bond option.
    lacpRate: fast # A bond option.

vlans []Vlan

VLAN specific options.


mtu int

The interface’s MTU. If used in combination with DHCP, this will override any MTU settings returned from DHCP server.


dhcp bool

Indicates if DHCP should be used to configure the interface. The following DHCP options are supported:

  • OptionClasslessStaticRoute
  • OptionDomainNameServer
  • OptionDNSDomainSearchList
  • OptionHostName

Note: This option is mutually exclusive with CIDR.

Note: To configure an interface with only IPv6 SLAAC addressing, CIDR should be set to "" and DHCP to false in order for Talos to skip configuration of addresses. All other options will still apply.

Examples:

dhcp: true

ignore bool

Indicates if the interface should be ignored (skips configuration).


dummy bool

Indicates if the interface is a dummy interface. dummy is used to specify that this interface should be a virtual-only, dummy interface.


dhcpOptions DHCPOptions

DHCP specific options. dhcp must be set to true for these to take effect.

Examples:

dhcpOptions:
    routeMetric: 1024 # The priority of all routes received via DHCP.

DHCPOptions

DHCPOptions contains options for configuring the DHCP settings for a given interface.

Appears in:

routeMetric: 1024 # The priority of all routes received via DHCP.

routeMetric uint32

The priority of all routes received via DHCP.


Bond

Bond contains the various options for configuring a bonded interface.

Appears in:

# The interfaces that make up the bond.
interfaces:
    - eth0
    - eth1
mode: 802.3ad # A bond option.
lacpRate: fast # A bond option.

interfaces []string

The interfaces that make up the bond.


arpIPTarget []string

A bond option. Please see the official kernel documentation.


mode string

A bond option. Please see the official kernel documentation.


xmitHashPolicy string

A bond option. Please see the official kernel documentation.


lacpRate string

A bond option. Please see the official kernel documentation.


adActorSystem string

A bond option. Please see the official kernel documentation.


arpValidate string

A bond option. Please see the official kernel documentation.


arpAllTargets string

A bond option. Please see the official kernel documentation.


primary string

A bond option. Please see the official kernel documentation.


primaryReselect string

A bond option. Please see the official kernel documentation.


failOverMac string

A bond option. Please see the official kernel documentation.


adSelect string

A bond option. Please see the official kernel documentation.


miimon uint32

A bond option. Please see the official kernel documentation.


updelay uint32

A bond option. Please see the official kernel documentation.


downdelay uint32

A bond option. Please see the official kernel documentation.


arpInterval uint32

A bond option. Please see the official kernel documentation.


resendIgmp uint32

A bond option. Please see the official kernel documentation.


minLinks uint32

A bond option. Please see the official kernel documentation.


lpInterval uint32

A bond option. Please see the official kernel documentation.


packetsPerSlave uint32

A bond option. Please see the official kernel documentation.


numPeerNotif uint8

A bond option. Please see the official kernel documentation.


tlbDynamicLb uint8

A bond option. Please see the official kernel documentation.


allSlavesActive uint8

A bond option. Please see the official kernel documentation.


useCarrier bool

A bond option. Please see the official kernel documentation.


adActorSysPrio uint16

A bond option. Please see the official kernel documentation.


adUserPortKey uint16

A bond option. Please see the official kernel documentation.


peerNotifyDelay uint32

A bond option. Please see the official kernel documentation.
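
Since each of these fields maps directly to a kernel bonding option, a fuller sketch of a bonded device might look like the following (the interface names and numeric values are illustrative assumptions, not recommendations; consult the kernel bonding documentation for details):

- interface: bond0 # Hypothetical bond interface name.
  dhcp: true
  bond:
    interfaces:
      - eth0
      - eth1
    mode: 802.3ad
    lacpRate: fast
    xmitHashPolicy: layer3+4 # Illustrative value.
    miimon: 100 # Illustrative link-monitoring interval, in milliseconds.
    updelay: 200 # Illustrative value.
    downdelay: 200 # Illustrative value.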


Vlan

Vlan represents vlan settings for a device.

Appears in:


cidr string

The CIDR to use.


routes []Route

A list of routes associated with the VLAN.


dhcp bool

Indicates if DHCP should be used.


vlanId uint16

The VLAN’s ID.
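
Putting the fields above together, here is a hedged sketch of a device carrying a tagged VLAN (interface names, VLAN ID, and addresses are illustrative assumptions):

- interface: eth0 # Hypothetical parent interface.
  dhcp: false
  vlans:
    - vlanId: 100 # Illustrative VLAN ID.
      cidr: 192.168.100.10/24 # Illustrative static address for the VLAN.
      dhcp: false
      routes:
        - network: 192.168.200.0/24 # Illustrative route reachable via the VLAN.
          gateway: 192.168.100.1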


Route

Route represents a network route.

Appears in:

- network: 0.0.0.0/0 # The route's network.
  gateway: 10.5.0.1 # The route's gateway.
- network: 10.2.0.0/16 # The route's network.
  gateway: 10.2.0.1 # The route's gateway.

network string

The route’s network.


gateway string

The route’s gateway.


metric uint32

The optional metric for the route.


RegistryMirrorConfig

RegistryMirrorConfig represents mirror configuration for a registry.

Appears in:

ghcr.io:
    # List of endpoints (URLs) for registry mirrors to use.
    endpoints:
        - https://registry.insecure
        - https://ghcr.io/v2/

endpoints []string

List of endpoints (URLs) for registry mirrors to use. Each endpoint configures the HTTP/HTTPS access mode, host name, port, and path (if the path is not set, it defaults to /v2).
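
As a hedged illustration of the path behaviour described above (the mirror hostnames are hypothetical), an endpoint without a path is treated as ending in /v2, while a custom path can be given explicitly:

ghcr.io:
    endpoints:
        - https://mirror.example.com # Hypothetical mirror; the path defaults to /v2.
        - https://mirror.example.com/custom/v2 # Hypothetical mirror with an explicit path.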


RegistryConfig

RegistryConfig specifies auth & TLS config per registry.

Appears in:

registry.insecure:
    # The TLS configuration for the registry.
    tls:
        insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).

        # # Enable mutual TLS authentication with the registry.
        # clientIdentity:
        #     crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
        #     key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

    # # The auth configuration for this registry.
    # auth:
    #     username: username # Optional registry authentication.
    #     password: password # Optional registry authentication.

tls RegistryTLSConfig

The TLS configuration for the registry.

Examples:

tls:
    # Enable mutual TLS authentication with the registry.
    clientIdentity:
        crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
        key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
tls:
    insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).

    # # Enable mutual TLS authentication with the registry.
    # clientIdentity:
    #     crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    #     key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

auth RegistryAuthConfig

The auth configuration for this registry.

Examples:

auth:
    username: username # Optional registry authentication.
    password: password # Optional registry authentication.

RegistryAuthConfig

RegistryAuthConfig specifies authentication configuration for a registry.

Appears in:

username: username # Optional registry authentication.
password: password # Optional registry authentication.

username string

Optional registry authentication. The meaning of each field is the same with the corresponding field in .docker/config.json.


password string

Optional registry authentication. The meaning of each field is the same with the corresponding field in .docker/config.json.


auth string

Optional registry authentication. The meaning of each field is the same with the corresponding field in .docker/config.json.


identityToken string

Optional registry authentication. The meaning of each field is the same with the corresponding field in .docker/config.json.
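
Since these fields mirror .docker/config.json, a hedged sketch of token-based authentication as an alternative to username/password might look like this (the registry host and token are placeholders):

registry.example.com: # Hypothetical registry host.
    auth:
        identityToken: <token obtained from the registry> # Placeholder value.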


RegistryTLSConfig

RegistryTLSConfig specifies TLS config for HTTPS registries.

Appears in:

# Enable mutual TLS authentication with the registry.
clientIdentity:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u
insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).

# # Enable mutual TLS authentication with the registry.
# clientIdentity:
#     crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
#     key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

clientIdentity PEMEncodedCertificateAndKey

Enable mutual TLS authentication with the registry. Client certificate and key should be base64-encoded.

Examples:

clientIdentity:
    crt: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVSklla05DTUhGLi4u
    key: TFMwdExTMUNSVWRKVGlCRlJESTFOVEU1SUZCU1NWWkJWRVVnUzBWWkxTMHRMUzBLVFVNLi4u

ca Base64Bytes

CA registry certificate to add to the list of trusted certificates. The certificate should be base64-encoded.
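
For example, a hedged sketch of trusting a private registry CA (the value is a placeholder for a base64-encoded certificate):

tls:
    ca: <base64-encoded CA certificate> # Placeholder; supply your registry CA certificate here.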


insecureSkipVerify bool

Skip TLS server certificate verification (not recommended).


8.4 - Platform

Metal

Below is an image visualizing the process of bootstrapping nodes.

9 - Learn More

9.1 - Philosophy

Distributed

Talos is intended to be operated in a distributed manner. That is, it is built for a high-availability dataplane first. Its etcd cluster is built in an ad-hoc manner, with each appointed node joining on its own directive (with proper security validations enforced, of course). As with Kubernetes itself, workloads are intended to be distributed across any number of compute nodes.

There should be no single points of failure, and the level of required coordination is as low as each platform allows.

Immutable

Talos takes immutability very seriously. Talos itself, even when installed on a disk, always runs from a SquashFS image, meaning that even if a directory is mounted to be writable, the image itself is never modified. All images are signed and delivered as single, versioned files. We can always run integrity checks on our image to verify that it has not been modified.

While Talos does allow a few, highly-controlled write points to the filesystem, we strive to make them as non-unique and non-critical as possible. In fact, we call the writable partition the “ephemeral” partition precisely because we want to make sure none of us ever uses it for unique, non-replicated, non-recreatable data. Thus, if all else fails, we can always wipe the disk and get back up and running.

Minimal

We are always trying to reduce Talos’ footprint and keep it small. Because nearly the entire OS is built from scratch in Go, we are already starting out in a good position. We have no shell. We have no SSH. We have none of the GNU utilities, not even a rollup tool such as busybox. Everything which is included in Talos is there because it is necessary, and nothing is included which isn’t.

As a result, the OS right now produces a SquashFS image size of less than 80 MB.

Ephemeral

Everything Talos writes to its disk is either replicated or reconstructable. Since the control plane is highly available, the loss of any node will cause neither service disruption nor loss of data. No writes are even allowed to the vast majority of the filesystem. We even call the writable partition “ephemeral” to keep this idea always in focus.

Secure

Talos has always been designed with security in mind. With its immutability, its minimalism, its signing, and its component-based design, we are able to simply bypass huge classes of vulnerabilities. Moreover, because of the way we have designed Talos, we are able to take advantage of a number of additional settings, such as the recommendations of the Kernel Self Protection Project (KSPP) and the complete disablement of dynamic modules.

There are no passwords in Talos. All networked communication is encrypted and key-authenticated. Talos certificates are short-lived and automatically rotated. Kubernetes is always constructed with its own separate PKI structure, which is enforced.

Declarative

Everything which can be configured in Talos is configured through a single YAML manifest. There is no scripting and there are no procedural steps. Everything is defined by the one declarative YAML file. This configuration includes that of both Talos itself and the Kubernetes which it forms.

This is achievable because Talos is tightly focused to do one thing: run Kubernetes in the easiest, most secure, most reliable way it can.

9.2 - Concepts

Platform

Mode

Endpoint

Node

9.3 - Architecture

Talos is designed to be atomic in deployment and modular in composition.

It is atomic in the sense that the entirety of Talos is distributed as a single, self-contained image, which is versioned, signed, and immutable.

It is modular in the sense that it is composed of many separate components which have clearly defined gRPC interfaces which facilitate internal flexibility and external operational guarantees.

There are a number of components which comprise Talos. All of the main Talos components communicate with each other by gRPC, through a socket on the local machine. This imposes a clear separation of concerns and ensures that changes over time which affect the interoperation of components are a part of the public git record. The benefit is that each component may be iterated and changed as its needs dictate, so long as the external API is controlled. This is a key component in reducing coupling and maintaining modularity.

The File System

One of the more unique design decisions in Talos is the layout of the root file system. There are three “layers” to the Talos root file system. At its core, the rootfs is a read-only SquashFS. The SquashFS is then mounted as a loop device into memory. This provides Talos with an immutable base.

The next layer is a set of tmpfs file systems for runtime-specific needs. Aside from the standard pseudo file systems such as /dev, /proc, /run, /sys and /tmp, a special /system is created for internal needs. One reason for this is that we need special files such as /etc/hosts and /etc/resolv.conf to be writable (remember that the rootfs is read-only). For example, at boot Talos will write /system/etc/hosts and then bind mount it over /etc/hosts. This means that instead of making all of /etc writable, Talos only makes very specific files writable under /etc.

All files under /system are completely reproducible. For files and directories that need to persist across boots, Talos creates overlayfs file systems. The /etc/kubernetes directory is a good example of this. Directories like this are overlayfs mounts backed by an XFS file system mounted at /var.

The /var directory is owned by Kubernetes with the exception of the above overlayfs file systems. This directory is writable and used by etcd (in the case of control plane nodes), the kubelet, and the CRI (containerd).

9.4 - Components

In this section we will discuss the various components of which Talos is comprised.

Components

  • apid - When interacting with Talos, the gRPC API endpoint you interact with directly is provided by apid. apid acts as the gateway for all component interactions and forwards the requests to routerd.
  • containerd - An industry-standard container runtime with an emphasis on simplicity, robustness and portability. To learn more, see the containerd website.
  • machined - Talos replacement for the traditional Linux init-process. Specially designed to run Kubernetes and does not allow starting arbitrary user services.
  • networkd - Handles all of the host level network configuration. Configuration is defined under the networking key.
  • timed - Handles the host time synchronization by acting as an NTP client.
  • kernel - The Linux kernel included with Talos is configured according to the recommendations outlined in the Kernel Self Protection Project.
  • routerd - Responsible for routing an incoming API request from apid to the appropriate backend (e.g. networkd, machined and timed).
  • trustd - To run and operate a Kubernetes cluster a certain level of trust is required. Based on the concept of a ‘Root of Trust’, trustd is a simple daemon responsible for establishing trust within the system.
  • udevd - Implementation of eudev into machined. eudev is Gentoo’s fork of udev, systemd’s device file manager for the Linux kernel. It manages device nodes in /dev and handles all user space actions when adding or removing devices. To learn more, see the Gentoo Wiki.

apid

When interacting with Talos, the gRPC API endpoint you will interact with directly is apid. apid acts as the gateway for all component interactions. apid provides a mechanism to route requests to the appropriate destination when running on a control plane node.

We’ll use some examples below to illustrate what apid is doing.

When a user wants to interact with a Talos component via talosctl, there are two flags that control the interaction with apid. The -e | --endpoints flag is used to denote which Talos node (via apid) should handle the connection. Typically this is a public-facing server. The -n | --nodes flag is used to denote which Talos node(s) should respond to the request. If --nodes is not specified, the first endpoint will be used.

Note: Typically there will be an endpoint already defined in the Talos config file. Optionally, nodes can be included here as well.

For example, if a user wants to interact with machined, a command like talosctl -e cluster.talos.dev memory may be used.

$ talosctl -e cluster.talos.dev memory
NODE                TOTAL   USED   FREE   SHARED   BUFFERS   CACHE   AVAILABLE
cluster.talos.dev   7938    1768   2390   145      53        3724    6571

In this case, talosctl is interacting with apid running on cluster.talos.dev and forwarding the request to the machined API.

If we wanted to extend our example to retrieve memory from another node in our cluster, we could use the command talosctl -e cluster.talos.dev -n node02 memory.

$ talosctl -e cluster.talos.dev -n node02 memory
NODE    TOTAL   USED   FREE   SHARED   BUFFERS   CACHE   AVAILABLE
node02  7938    1768   2390   145      53        3724    6571

The apid instance on cluster.talos.dev receives the request and forwards it to apid running on node02, which forwards the request to the machined API.

We can further extend our example to retrieve memory for all nodes in our cluster by appending additional -n node flags or using a comma-separated list of nodes (-n node01,node02,node03):

$ talosctl -e cluster.talos.dev -n node01 -n node02 -n node03 memory
NODE     TOTAL    USED    FREE     SHARED   BUFFERS   CACHE   AVAILABLE
node01   7938     871     4071     137      49        2945    7042
node02   257844   14408   190796   18138    49        52589   227492
node03   257844   1830    255186   125      49        777     254556

The apid instance on cluster.talos.dev receives the request and forwards it to node01, node02, and node03, which then forward the request to their local machined API.

containerd

Containerd provides the container runtime to launch workloads on Talos as well as Kubernetes.

Talos services are namespaced under the system namespace in containerd whereas the Kubernetes services are namespaced under the k8s.io namespace.

machined

A common theme throughout the design of Talos is minimalism. We believe strongly in the UNIX philosophy that each program should do one job well. The init included in Talos is one example of this, and we are calling it “machined”.

We wanted to create a focused init that had one job: run Kubernetes. To that extent, machined is relatively static in that it does not allow for arbitrary user-defined services. Only the services necessary to run Kubernetes and manage the node are available; these services are described in the sections that follow.

networkd

Networkd handles all of the host level network configuration. Configuration is defined under the networking key.
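
For reference, here is a minimal sketch of a statically configured interface, assuming the machine config nests interfaces under machine.network (the interface name and addresses are hypothetical; see the Device reference above for the full set of fields):

machine:
  network:
    interfaces:
      - interface: eth0 # Hypothetical interface name.
        cidr: 192.168.2.10/24 # Illustrative static address.
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.2.1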

By default, we attempt to issue a DHCP request for every interface on the server. This can be overridden by supplying one of the following kernel arguments:

  • talos.network.interface.ignore - specify a list of interfaces to skip discovery on
  • ip - ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip> as documented in the Linux kernel documentation
    • e.g., ip=10.0.0.99:::255.0.0.0:control-1:eth0:off:10.0.0.1

timed

Timed handles the host time synchronization.

kernel

The Linux kernel included with Talos is configured according to the recommendations outlined in the Kernel Self Protection Project (KSPP).

trustd

Security is one of the highest priorities within Talos. To run a Kubernetes cluster a certain level of trust is required to operate a cluster. For example, orchestrating the bootstrap of a highly available control plane requires the distribution of sensitive PKI data.

To that end, we created trustd. Based on the concept of a Root of Trust, trustd is a simple daemon responsible for establishing trust within the system. Once trust is established, various methods become available to the trustee. It can, for example, accept a write request from another node to place a file on disk.

Additional methods and capability will be added to the trustd component in support of new functionality in the rest of the Talos environment.

udevd

Udevd handles the kernel device notifications and sets up the necessary links in /dev.

9.5 - Upgrades

Talos

The upgrade process for Talos, like everything else, begins with an API call. This call tells a node the installer image to use to perform the upgrade. Each Talos version corresponds to an installer with the same version, such that the version of the installer is the version of Talos which will be installed.

Because Talos is image based, even at run-time, upgrading Talos is almost exactly the same set of operations as installing Talos, with the difference that the system has already been initialized with a configuration.

An upgrade makes use of an A-B image scheme in order to facilitate rollbacks. This scheme retains the one previous Talos kernel and OS image following each upgrade. If an upgrade fails to boot, Talos will roll back to the previous version. Likewise, Talos may be manually rolled back via API (or talosctl rollback). This will simply update the boot reference and reboot.

An upgrade can preserve data or not. If Talos is told to NOT preserve data, it will wipe its ephemeral partition, remove itself from the etcd cluster (if it is a control plane node), and generally make itself as pristine as possible. The default for this option may change over time, so if your setup has a preference one way or the other, it is better to specify it explicitly; either way, we try to always choose the “safe” behaviour.

Sequence

When a Talos node receives the upgrade command, the first thing it does is cordon itself in Kubernetes, to avoid receiving any new workload. It then starts to drain away its existing workload.

NOTE: If any of your workloads is sensitive to being shut down ungracefully, be sure to use the lifecycle.preStop Pod spec.

Once all of the workload Pods are drained, Talos will start shutting down its internal processes. If it is a control node, this will include etcd. If preserve is not enabled, Talos will even leave etcd membership. (Don’t worry about this; we make sure the etcd cluster is healthy and that it will remain healthy after our node departs, before we allow this to occur.)

Once all the processes are stopped and the services are shut down, all of the filesystems will be unmounted. This allows Talos to produce a very clean upgrade, as close as possible to a pristine system. We verify the disk and then perform the actual image upgrade.

Finally, we tell the bootloader to boot once with the new kernel and OS image. Then we reboot.

After the node comes back up and Talos verifies itself, it will make permanent the bootloader change, rejoin the cluster, and finally uncordon itself to receive new workloads.

FAQs

Q. What happens if an upgrade fails?

A. There are many potential ways an upgrade can fail, but we always try to do the safe thing.

The most common first failure is an invalid installer image reference. In this case, Talos will fail to download the upgraded image and will abort the upgrade.

Sometimes, Talos is unable to successfully kill off all of the disk access points, in which case it cannot safely unmount all filesystems to effect the upgrade. In this case, it will abort the upgrade and reboot.

It is possible (especially with test builds) that the upgraded Talos system will fail to start. In this case, the node will be rebooted, and the bootloader will automatically use the previous Talos kernel and image, thus effectively aborting the upgrade.

Lastly, it is possible that Talos itself will upgrade successfully, start up, and rejoin the cluster, but your workload will fail to run on it for whatever reason. This is when you would use the talosctl rollback command to revert to the previous Talos version.

Q. Can upgrades be scheduled?

A. We provide the Talos Controller Manager to coordinate upgrades of a cluster. Additionally, because the upgrade sequence is API-driven, you can easily tie this in to your own business logic to schedule and coordinate your upgrades.

Q. Can the upgrade process be observed?

A. The Talos Controller Manager does this internally, watching the logs of the node being upgraded, using the streaming log API of Talos.

You can do the same thing using the talosctl logs --follow machined command.

Q. Are worker node upgrades handled differently from control plane node upgrades?

A. Short answer: no.

Long answer: Both node types follow the same procedure. However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them. From the user’s standpoint, the processes are identical.

There are also additional restrictions on upgrading control plane nodes. For instance, Talos will refuse to upgrade a control plane node if that upgrade will cause a loss of quorum for etcd. This can generally be worked around by setting preserve to true.

Q. Will an upgrade try to do the whole cluster at once? Can I break my cluster by upgrading everything?

A. No.

Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster. While most people would not attempt to do this, it may be the desired behaviour in certain situations.

If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum. A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until that lease is released, the etcd cluster is healthy, and it will remain healthy when the next node performs its upgrade.

Q. Is there an operator or controller which will keep my nodes updated automatically?

A. Yes.

We provide the Talos Controller Manager to perform this maintenance in a simple, controllable fashion.

Upgrade Notes for Talos 0.8

Talos 0.8 comes with a new KSPP requirements compliance check.

The following kernel arguments are mandatory for Talos to boot successfully:

  • init_on_alloc=1: required by KSPP
  • init_on_free=1: required by KSPP
  • slab_nomerge: required by KSPP
  • pti=on: required by KSPP

The Talos installer automatically injects these arguments during installation, so this is mostly relevant when PXE booting Talos.

Kubernetes

Kubernetes upgrades with Talos are covered in a separate document.

9.6 - FAQs

How is Talos different from other container optimized Linux distros?

Talos shares a lot of attributes with other distros, but there are some important differences. Talos integrates tightly with Kubernetes, and is not meant to be a general-purpose operating system. The most important difference is that Talos is fully controlled by an API via a gRPC interface, instead of an ordinary shell. We don’t ship SSH, and there is no console access. Removing components such as these has allowed us to dramatically reduce the footprint of Talos, and in turn, improve a number of other areas like security, predictability, reliability, and consistency across platforms. It’s a big change from how operating systems have been managed in the past, but we believe that API-driven OSes are the future.

Why no shell or SSH?

Since Talos is fully API-driven, all maintenance and debugging operations should be possible via the OS API. We would like for Talos users to start thinking about what a “machine” is in the context of a Kubernetes cluster. That is, that a Kubernetes cluster can be thought of as one massive machine, and the nodes are merely additional, undifferentiated resources. We don’t want humans to focus on the nodes, but rather on the machine that is the Kubernetes cluster. Should an issue arise at the node level, talosctl should provide the necessary tooling to assist in the identification, debugging, and remediation of the issue. However, the API is based on the Principle of Least Privilege, and exposes only a limited set of methods. We envision Talos being a great place for the application of control theory in order to provide a self-healing platform.

Why the name “Talos”?

Talos was an automaton created by the Greek God of the forge to protect the island of Crete. He would patrol the coast and enforce laws throughout the land. We felt it was a fitting name for a security focused operating system designed to run Kubernetes.

9.7 - talosctl

The talosctl tool packs a lot of power into a small package. It acts as a reference implementation for the Talos API, but it also handles a lot of conveniences for the use of Talos and its clusters.

Video Walkthrough

To see some live examples of talosctl usage, view the following video:

Client Configuration

Talosctl configuration is located in $XDG_CONFIG_HOME/talos/config.yaml if $XDG_CONFIG_HOME is defined. Otherwise it is in $HOME/.talos/config. The location can always be overridden by the TALOSCONFIG environment variable or the --talosconfig parameter.

Like kubectl, talosctl uses the concept of configuration contexts, so any number of Talos clusters can be managed with a single configuration file. Unlike kubectl, it also comes with some intelligent tooling to manage the merging of new contexts into the config. The default operation is a non-destructive merge, where if a context of the same name already exists in the file, the context to be added is renamed by appending an index number. You can also choose to overwrite an existing context instead. See the talosctl config help for more information.

Endpoints and Nodes

Endpoints and Nodes

The endpoints are the communication endpoints to which the client directly talks. These can be load balancers, DNS hostnames, a list of IPs, etc. Further, if multiple endpoints are specified, the client will automatically load balance and fail over between them. In general, it is recommended that these point to the set of control plane nodes, either directly or through a reverse proxy or load balancer.

Each endpoint will automatically proxy requests destined to another node through it, so it is not necessary to change the endpoint configuration just because you wish to talk to a different node within the cluster.

Endpoints do, however, need to be members of the same Talos cluster as the target node, because these proxied connections rely on certificate-based authentication.

The node is the target node on which you wish to perform the API call. While you can configure the target node (or even a set of target nodes) inside the talosctl configuration file, it is often useful to simply and explicitly declare the target node(s) using the -n or --nodes command-line parameter.

Keep in mind, when specifying nodes that their IPs and/or hostnames are as seen by the endpoint servers, not as from the client. This is because all connections are proxied first through the endpoints.
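
Tying contexts, endpoints, and nodes together, here is a hedged sketch of what a talosctl configuration file might look like (the context name, addresses, and certificate material are placeholders, assuming the context/contexts layout):

context: my-cluster # Hypothetical context name.
contexts:
  my-cluster:
    endpoints:
      - 10.5.0.2 # A control plane node or load balancer reachable from the client.
    nodes:
      - 10.5.0.3 # The target node, addressed as seen from the endpoint.
    ca: <base64-encoded CA certificate>
    crt: <base64-encoded client certificate>
    key: <base64-encoded client key>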

Kubeconfig

The configuration for accessing a Talos Kubernetes cluster is obtained with talosctl. By default, talosctl will safely merge the cluster into the default kubeconfig. Like talosctl itself, in the event of a naming conflict, the new context name will be index-appended before insertion. The --force option can be used to overwrite instead.

You can also specify an alternate path by supplying it as a positional parameter.

Thus, like Talos clusters themselves, talosctl makes it easy to manage any number of Kubernetes clusters from the same workstation.

Commands

Please see the CLI reference for the entire list of commands which are available from talosctl.