Bare Metal Platforms
- 1: Digital Rebar
- 2: Equinix Metal
- 3: ISO
- 4: Matchbox
- 5: Network Configuration
- 6: PXE
- 7: SecureBoot
1 - Digital Rebar
Prerequisites
- 3 nodes (please see hardware requirements)
- Loadbalancer
- Digital Rebar Server
- Talosctl access (see talosctl setup)
Creating a Cluster
In this guide we will create a Kubernetes cluster with 1 worker node and 2 controlplane nodes. We assume an existing Digital Rebar deployment and some familiarity with iPXE.
We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
The load balancer is used to distribute the load across multiple controlplane nodes. This isn’t covered in detail, because we assume some load-balancing knowledge beforehand. If you think this should be added to the docs, please create an issue.
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch with an RFC 6902 JSON patch, which will be applied during config generation.
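For example, a single patch entry could override the install disk during generation (a sketch; the path and value are illustrative and should match your hardware):
talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port> \
  --config-patch '[{"op": "replace", "path": "/machine/install/disk", "value": "/dev/sda"}]'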
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config worker.yaml --mode metal
worker.yaml is valid for metal mode
Publishing the Machine Configuration Files
Digital Rebar has a built-in file server, which we can use to expose the Talos configuration files.
We will place controlplane.yaml and worker.yaml into the Digital Rebar file server by using the drpcli tool.
Copy the generated files from the step above into your Digital Rebar installation:
drpcli file upload <file>.yaml as <file>.yaml
Replace <file> with controlplane or worker.
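For example, to upload both generated files:
drpcli file upload controlplane.yaml as controlplane.yaml
drpcli file upload worker.yaml as worker.yaml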
Download the boot files
Download a recent version of boot.tar.gz from GitHub.
Upload to DRP:
$ drpcli isos upload boot.tar.gz as talos.tar.gz
{
"Path": "talos.tar.gz",
"Size": 96470072
}
We have some Digital Rebar example files in the Git repo you can use to provision Digital Rebar with drpcli.
To apply these configs you need to create them, and then apply them as follows:
$ drpcli bootenvs create talos
{
"Available": true,
"BootParams": "",
"Bundle": "",
"Description": "",
"Documentation": "",
"Endpoint": "",
"Errors": [],
"Initrds": [],
"Kernel": "",
"Meta": {},
"Name": "talos",
"OS": {
"Codename": "",
"Family": "",
"IsoFile": "",
"IsoSha256": "",
"IsoUrl": "",
"Name": "",
"SupportedArchitectures": {},
"Version": ""
},
"OnlyUnknown": false,
"OptionalParams": [],
"ReadOnly": false,
"RequiredParams": [],
"Templates": [],
"Validated": true
}
drpcli bootenvs update talos - < bootenv.yaml
You need to do this for all files in the example directory. If you don’t have access to the drpcli tool, you can also use the web interface.
It’s important to have a corresponding SHA256 hash matching the boot.tar.gz.
Bootenv BootParams
We’re using some of Digital Rebar’s built-in templating to make sure the machine gets the correct role assigned.
talos.platform=metal talos.config={{ .ProvisionerURL }}/files/{{.Param \"talos/role\"}}.yaml
This is why we also include a params.yaml in the example directory to make sure the role is set to one of the following:
- controlplane
- worker
The {{.Param \"talos/role\"}}
then gets populated with one of the above roles.
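If you prefer the CLI over the web interface, the role can also be set per machine with drpcli (a sketch; check drpcli machines set --help for the exact syntax of your Digital Rebar version, and replace the UUID placeholder):
drpcli machines set <machine-uuid> param talos/role to '"controlplane"'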
Boot the Machines
In the Digital Rebar UI, select the machines you want to provision. Once selected, you need to assign the following:
- Profile
- Workflow
This will provision the Stage and Bootenv with the Talos values. Once this is done, you can boot the machine.
Bootstrap Etcd
To configure talosctl we will need the first control plane node’s IP.
Set the endpoints and nodes:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
2 - Equinix Metal
You can create a Talos Linux cluster on Equinix Metal in a variety of ways, such as through the EM web UI, the metal command line tool, or through PXE booting.
Talos Linux is a supported OS install option on Equinix Metal, so it’s an easy process.
Regardless of the method, the process is:
- Create a DNS entry for your Kubernetes endpoint.
- Generate the configurations using talosctl.
- Provision your machines on Equinix Metal.
- Push the configurations to your servers (if not done as part of the machine provisioning).
- Configure your Kubernetes endpoint to point to the newly created control plane nodes.
- Bootstrap the cluster.
Define the Kubernetes Endpoint
There are a variety of ways to create an HA endpoint for the Kubernetes cluster. Some of the ways are:
- DNS
- Load Balancer
- BGP
Whatever way is chosen, it should result in an IP address/DNS name that routes traffic to all the control plane nodes. We do not know the control plane node IP addresses at this stage, but we should define the endpoint DNS entry so that we can use it in creating the cluster configuration. After the nodes are provisioned, we can use their addresses to create the endpoint A records, or bind them to the load balancer, etc.
Create the Machine Configuration Files
Generating Configurations
Using the DNS name of the loadbalancer defined above, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-em-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
The port used above should be 6443, unless your load balancer maps a different port to port 6443 on the control plane nodes.
Validate the Configuration Files
talosctl validate --config controlplane.yaml --mode metal
talosctl validate --config worker.yaml --mode metal
Note: Validation of the install disk could potentially fail as validation is performed on your local machine and the specified disk may not exist.
Passing in the configuration as User Data
You can use the metadata service provided by Equinix Metal to pass in the machine configuration. It is required to add a shebang to the top of the configuration file. The convention we use is #!talos.
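For example, the first lines of the user data for a control plane node would look like this (a sketch of the generated controlplane.yaml with the shebang prepended):
#!talos
version: v1alpha1
machine:
  type: controlplane
  ...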
Provision the machines in Equinix Metal
Using the Equinix Metal UI
Simply select the location and type of machines in the Equinix Metal web interface.
Select Talos as the Operating System, then select the number of servers to create, and name them (in lowercase only).
Under optional settings, you can paste in the contents of the controlplane.yaml that was generated above (ensuring you add a first line of #!talos).
You can repeat this process to create machines of different types for control plane and worker nodes (although you would pass in worker.yaml for the worker nodes, as user data).
If you did not pass in the machine configuration as User Data, you need to provide it to each machine, with the following command:
talosctl apply-config --insecure --nodes <Node IP> --file ./controlplane.yaml
Creating a Cluster via the Equinix Metal CLI
This guide assumes the user has a working API token, and the Equinix Metal CLI installed.
Because Talos Linux is a supported operating system, Talos Linux machines can be provisioned directly via the CLI, using the -O talos_v1 parameter (for Operating System).
Note: Ensure you have prepended #!talos to the controlplane.yaml file.
metal device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --operating-system "talos_v1" \
  --plan $PLAN \
  --hostname $HOSTNAME \
  --userdata-file controlplane.yaml
e.g. metal device create -p <projectID> -f da11 -O talos_v1 -P c3.small.x86 -H steve.test.11 --userdata-file ./controlplane.yaml
Repeat this to create each control plane node desired: there should usually be 3 for an HA cluster.
Network Booting via iPXE
You may install Talos over the network using TFTP and iPXE. You would first need a working TFTP and iPXE server.
In general this requires the Talos kernel (vmlinuz) and initramfs. These assets can be downloaded from a given release.
PXE Boot Kernel Parameters
The following is a list of kernel parameters required by Talos:
- talos.platform: set this to equinixMetal
- init_on_alloc=1: required by KSPP
- slab_nomerge: required by KSPP
- pti=on: required by KSPP
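Putting these together, a minimal iPXE script served from $PXE_SERVER might look like the following. This is a sketch: the release asset URLs and the console= argument are assumptions to adapt for your environment.
#!ipxe

kernel https://github.com/siderolabs/talos/releases/download/v1.5.5/vmlinuz-amd64 talos.platform=equinixMetal init_on_alloc=1 slab_nomerge pti=on console=ttyS1,115200n8
initrd https://github.com/siderolabs/talos/releases/download/v1.5.5/initramfs-amd64.xz
boot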
Create the Control Plane Nodes
metal device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --ipxe-script-url $PXE_SERVER \
  --operating-system "custom_ipxe" \
  --plan $PLAN \
  --hostname $HOSTNAME \
  --userdata-file controlplane.yaml
Note: Repeat this to create each control plane node desired: there should usually be 3 for a HA cluster.
Create the Worker Nodes
metal device create \
  --project-id $PROJECT_ID \
  --facility $FACILITY \
  --ipxe-script-url $PXE_SERVER \
  --operating-system "custom_ipxe" \
  --plan $PLAN \
  --hostname $HOSTNAME \
  --userdata-file worker.yaml
Update the Kubernetes endpoint
Now that our control plane nodes have been created, and we know their IP addresses, we can associate them with the Kubernetes endpoint.
Configure your load balancer to route traffic to these nodes, or add A records to your DNS entry for the endpoint, for each control plane node.
e.g.
host endpoint.mydomain.com
endpoint.mydomain.com has address 145.40.90.201
endpoint.mydomain.com has address 147.75.109.71
endpoint.mydomain.com has address 145.40.90.177
Bootstrap Etcd
Set the endpoints and nodes for talosctl:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
This only needs to be issued to one control plane node.
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
3 - ISO
Talos can be installed on a bare-metal machine using an ISO image.
ISO images for the amd64 and arm64 architectures are available on the Talos releases page.
Talos doesn’t install itself to disk when booted from an ISO until the machine configuration is applied.
Please follow the getting started guide for the generic steps on how to install Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from a Talos ISO. The boot order should prefer disk over ISO, or the ISO should be removed after the installation to make Talos boot from disk.
See kernel parameters reference for the list of kernel parameters supported by Talos.
There are two flavors of ISO images available:
- metal-<arch>.iso supports booting on BIOS and UEFI systems (for x86; UEFI only for arm64)
- secureboot-metal-<arch>.iso supports booting only on UEFI systems in SecureBoot mode
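For example, the amd64 ISO for a given release can be downloaded with (URL shown for v1.5.5; substitute the release you need):
curl -LO https://github.com/siderolabs/talos/releases/download/v1.5.5/metal-amd64.iso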
4 - Matchbox
Creating a Cluster
In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing load balancer, matchbox deployment, and some familiarity with iPXE.
We leave it up to the user to decide if they would like to use static networking, or DHCP. The setup and configuration of DHCP will not be covered.
Create the Machine Configuration Files
Generating Base Configurations
Using the DNS name of the load balancer, generate the base configuration files for the Talos machines:
$ talosctl gen config talos-k8s-metal-tutorial https://<load balancer IP or DNS>:<port>
created controlplane.yaml
created worker.yaml
created talosconfig
At this point, you can modify the generated configs to your liking.
Optionally, you can specify --config-patch with an RFC 6902 JSON patch, which will be applied during config generation.
Validate the Configuration Files
$ talosctl validate --config controlplane.yaml --mode metal
controlplane.yaml is valid for metal mode
$ talosctl validate --config worker.yaml --mode metal
worker.yaml is valid for metal mode
Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
A special kernel parameter (talos.config) must be used to inform Talos about where it should retrieve its configuration file.
To keep things simple we will place controlplane.yaml and worker.yaml into Matchbox’s assets directory.
This directory is automatically served by Matchbox.
Create the Matchbox Configuration Files
The profiles we will create will reference vmlinuz and initramfs.xz.
Download these files from the release of your choice, and place them in /var/lib/matchbox/assets.
Profiles
Control Plane Nodes
{
"id": "control-plane",
"name": "control-plane",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"console=ttyS0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/controlplane.yaml"
]
}
}
Note: Be sure to change http://matchbox.talos.dev to the endpoint of your matchbox server.
Worker Nodes
{
"id": "default",
"name": "default",
"boot": {
"kernel": "/assets/vmlinuz",
"initrd": ["/assets/initramfs.xz"],
"args": [
"initrd=initramfs.xz",
"init_on_alloc=1",
"slab_nomerge",
"pti=on",
"console=tty0",
"console=ttyS0",
"printk.devkmsg=on",
"talos.platform=metal",
"talos.config=http://matchbox.talos.dev/assets/worker.yaml"
]
}
}
Groups
Now, create the following groups, and ensure that the selectors are accurate for your specific setup.
{
"id": "control-plane-1",
"name": "control-plane-1",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-2",
"name": "control-plane-2",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "control-plane-3",
"name": "control-plane-3",
"profile": "control-plane",
"selector": {
...
}
}
{
"id": "default",
"name": "default",
"profile": "default"
}
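As an illustration, a selector typically matches on a hardware attribute such as the machine's MAC address (the address below is a placeholder):
{
  "id": "control-plane-1",
  "name": "control-plane-1",
  "profile": "control-plane",
  "selector": {
    "mac": "52:54:00:a1:9c:ae"
  }
}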
Boot the Machines
Now that we have our configuration files in place, boot all the machines. Talos will come up on each machine, grab its configuration file, and bootstrap itself.
Bootstrap Etcd
Set the endpoints and nodes:
talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
Bootstrap etcd:
talosctl --talosconfig talosconfig bootstrap
Retrieve the kubeconfig
At this point we can retrieve the admin kubeconfig by running:
talosctl --talosconfig talosconfig kubeconfig .
5 - Network Configuration
By default, Talos will run a DHCP client on all interfaces which have a link, and that might be enough for most cases. If some advanced network configuration is required, it can be done via the machine configuration file.
But sometimes it is required to apply network configuration even before the machine configuration can be fetched from the network.
Kernel Command Line
Talos supports some kernel command line parameters to configure the network before the machine configuration is fetched.
Note: Kernel command line parameters are not persisted after Talos installation, so proper network configuration should be done via the machine configuration.
The address, default gateway, and DNS servers can be configured via the ip= kernel command line parameter:
ip=172.20.0.2::172.20.0.1:255.255.255.0::eth0.100:::::
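The colon-separated fields follow the kernel's ip= syntax; annotated against the example above, they are (a sketch; empty fields are simply left blank):
# ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>:<dns1>:<dns2>:<ntp>
ip=172.20.0.2::172.20.0.1:255.255.255.0::eth0.100:::::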
Bonding can be configured via the bond= kernel command line parameter:
bond=bond0:eth0,eth1:balance-rr
VLANs can be configured via the vlan= kernel command line parameter:
vlan=eth0.100:eth0
See kernel parameters reference for more details.
Platform Network Configuration
Some platforms (e.g. AWS, Google Cloud, etc.) have their own network configuration mechanisms, which can be used to perform the initial network configuration.
There is no such mechanism for bare-metal platforms, so Talos provides a way to use platform network config on the metal platform to submit the initial network configuration.
The platform network configuration is a YAML document which contains resource specifications for various network resources.
For the metal platform, the interactive dashboard can be used to edit the platform network configuration.
The current value of the platform network configuration can be retrieved using the MetaKeys resource (key 0xa):
talosctl get meta 0xa
The platform network configuration can be updated using the talosctl meta command for the running node:
talosctl meta write 0xa '{"externalIPs": ["1.2.3.4"]}'
talosctl meta delete 0xa
The initial platform network configuration for the metal platform can also be included in the generated Talos image:
docker run --rm -i ghcr.io/siderolabs/imager:v1.5.5 iso --arch amd64 --tar-to-stdout --meta 0xa='{...}' | tar xz
docker run --rm -i --privileged ghcr.io/siderolabs/imager:v1.5.5 image --platform metal --arch amd64 --tar-to-stdout --meta 0xa='{...}' | tar xz
The platform network configuration gets merged with other sources of network configuration; the details can be found in the network resources guide.
6 - PXE
Talos can be installed on bare metal using a PXE service. There are two more detailed guides for PXE booting using Matchbox and Digital Rebar.
This guide describes generic steps for PXE booting Talos on bare-metal.
First, download the vmlinuz and initramfs assets from the Talos releases page.
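For example (asset names shown for v1.5.5 on amd64):
curl -LO https://github.com/siderolabs/talos/releases/download/v1.5.5/vmlinuz-amd64
curl -LO https://github.com/siderolabs/talos/releases/download/v1.5.5/initramfs-amd64.xz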
Set up the machines to PXE boot from the network (usually by setting the boot order in the BIOS).
There might be options specific to the hardware being used, booting in BIOS or UEFI mode, using iPXE, etc.
Talos requires the following kernel parameters to be set on the initial boot:
talos.platform=metal
slab_nomerge
pti=on
When booted from the network without machine configuration, Talos will start in maintenance mode.
Please follow the getting started guide for the generic steps on how to install Talos.
See kernel parameters reference for the list of kernel parameters supported by Talos.
Note: If there is already a Talos installation on the disk, the machine will boot into that installation when booting from network. The boot order should prefer disk over network.
Talos can automatically fetch the machine configuration from the network on the initial boot using the talos.config kernel parameter.
A metadata service (HTTP service) can be implemented to deliver customized configuration to each node, for example by using the MAC address of the node:
talos.config=https://metadata.service/talos/config?mac=${mac}
Note: The talos.config kernel parameter supports other substitution variables; see kernel parameters reference for the full list.
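For instance, a node's SMBIOS UUID can be used instead of (or alongside) the MAC address; the exact variable names are listed in the kernel parameters reference:
talos.config=https://metadata.service/talos/config?uuid=${uuid}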
7 - SecureBoot
Talos now supports booting on UEFI systems in SecureBoot mode. When combined with TPM-based disk encryption, this provides a Trusted Boot experience.
Note: SecureBoot is not supported on x86 platforms in BIOS mode.
The implementation uses systemd-boot as the boot menu, while the Talos kernel, initramfs, and cmdline arguments are combined into the Unified Kernel Image (UKI) format.
The UEFI firmware loads the systemd-boot bootloader, which then loads the UKI image.
Both the systemd-boot and Talos UKI images are signed with a key, which is enrolled into the UEFI firmware.
As Talos Linux is fully contained in the UKI image, the full operating system is verified and booted by the UEFI firmware.
Note: There is no support at the moment to upgrade non-UKI (GRUB-based) Talos installation to use UKI/SecureBoot, so a fresh installation is required.
SecureBoot with Sidero Labs Images
Sidero Labs provides Talos images signed with the Sidero Labs SecureBoot key via Image Factory.
Note: The SecureBoot images are available for Talos releases starting from v1.5.0.
The easiest way to get started with SecureBoot is to download the ISO, and boot it on a UEFI-enabled system which has SecureBoot enabled in setup mode.
The ISO bootloader will enroll the keys in the UEFI firmware, and boot Talos Linux in SecureBoot mode.
The install should be performed using the SecureBoot installer (set it in the Talos machine configuration): factory.talos.dev/installer-secureboot/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba:v1.5.5.
Note: SecureBoot images can also be generated with custom keys.
Booting Talos Linux in SecureBoot Mode
In this guide we will use the ISO image to boot Talos Linux in SecureBoot mode, followed by submitting machine configuration to the machine in maintenance mode. We will use one of the ways to generate and submit machine configuration to the node; please refer to the Production Notes for the full guide.
First, make sure SecureBoot is enabled in the UEFI firmware.
For the first boot, the UEFI firmware should be in the setup mode, so that the keys can be enrolled into the UEFI firmware automatically.
If the UEFI firmware does not support automatic enrollment, you may need to hit Esc to force the boot menu to appear, and select the Enroll Secure Boot keys: auto option.
Note: There are other ways to enroll the keys into the UEFI firmware, but this is out of scope of this guide.
Once Talos is running in maintenance mode, verify that secure boot is enabled:
$ talosctl -n <IP> get securitystate --insecure
NODE NAMESPACE TYPE ID VERSION SECUREBOOT
runtime SecurityState securitystate 1 true
Now we will generate the machine configuration for the node, supplying the installer-secureboot container image and applying the patch to enable TPM-based disk encryption (requires TPM 2.0):
# tpm-disk-encryption.yaml
machine:
systemDiskEncryption:
ephemeral:
provider: luks2
keys:
- slot: 0
tpm: {}
state:
provider: luks2
keys:
- slot: 0
tpm: {}
Generate machine configuration:
talosctl gen config <cluster-name> https://<endpoint>:6443 --install-image=factory.talos.dev/installer-secureboot/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba:v1.5.5 --install-disk=/dev/sda --config-patch @tpm-disk-encryption.yaml
Apply machine configuration to the node:
talosctl -n <IP> apply-config --insecure -f controlplane.yaml
Talos will perform the installation to the disk and reboot the node. Please make sure that the ISO image is not attached to the node anymore, otherwise the node will boot from the ISO image again.
Once the node is rebooted, verify that the node is running in secure boot mode:
talosctl -n <IP> --talosconfig=talosconfig get securitystate
Upgrading Talos Linux
Any change to the boot asset (kernel, initramfs, kernel command line) requires the UKI to be regenerated and the installer image to be rebuilt.
Follow the steps above to generate a new installer image with the updated boot assets: use a new Talos version, add a system extension, or modify the kernel command line.
Once the new installer image is pushed to the registry, upgrade the node using the new installer image.
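The upgrade itself is then triggered with talosctl, pointing at the new installer image (the image reference below is a placeholder):
talosctl -n <IP> upgrade --image ghcr.io/<user>/installer-amd64-secureboot:v1.5.5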
It is important to preserve the UKI signing key and the PCR signing key, otherwise the node will not be able to boot with the new UKI and unlock the encrypted partitions.
Disk Encryption with TPM
When encrypting the disk partition for the first time, Talos Linux generates a random disk encryption key and seals (encrypts) it with the TPM device. The TPM unlock policy is configured to trust the expected policy signed by the PCR signing key. This way TPM unlocking doesn’t depend on the exact PCR measurements, but rather on the expected policy signed by the PCR signing key and the state of SecureBoot (PCR 7 measurement, including secureboot status and the list of enrolled keys).
When the UKI image is generated, the UKI is measured and expected measurements are combined into TPM unlock policy and signed with the PCR signing key.
During the boot process, the systemd-stub component of the UKI performs measurements of the UKI sections into the TPM device.
During boot, Talos Linux appends the measurements of the boot phases to the PCR register, and once the boot reaches the point of mounting the encrypted disk partition, the expected signed policy from the UKI is matched against the measured values to unlock the TPM, and the TPM unseals the disk encryption key, which is then used to unlock the disk partition.
During an upgrade, as long as the new UKI contains a PCR policy signed with the same PCR signing key and the SecureBoot state has not changed, the disk partition will be unlocked successfully.
Disk encryption is also tied to the state of PCR register 7, so that it unlocks only if SecureBoot is enabled and the set of enrolled keys hasn’t changed.
Other Boot Options
The Unified Kernel Image (UKI) is a UEFI-bootable image which can be booted directly by the UEFI firmware, skipping the systemd-boot bootloader.
In network boot mode, the UKI can be used directly as well, as it contains the full set of boot assets required to boot Talos Linux.
When SecureBoot is enabled, the UKI image ignores any kernel command line arguments passed to it, but rather uses the kernel command line arguments embedded into the UKI image itself. If kernel command line arguments need to be changed, the UKI image needs to be rebuilt with the new kernel command line arguments.
SecureBoot with Custom Keys
Generating the Keys
Talos requires two sets of keys to be used for the SecureBoot process:
- SecureBoot key is used to sign the boot assets and it is enrolled into the UEFI firmware.
- PCR Signing Key is used to sign the TPM policy, which is used to seal the disk encryption key.
The same key might be used for both, but it is recommended to use separate keys for each purpose.
Talos provides a utility to generate the keys, but existing PKI infrastructure can be used as well:
$ talosctl gen secureboot uki --common-name "SecureBoot Key"
writing _out/uki-signing-cert.pem
writing _out/uki-signing-key.pem
The generated certificate and private key are written to disk in PEM-encoded format (RSA 4096-bit key).
PCR signing key can be generated with:
$ talosctl gen secureboot pcr
writing _out/pcr-signing-key.pem
The file containing the private key is written to disk in PEM-encoded format (RSA 2048-bit key).
Optionally, the UEFI automatic key enrollment database can be generated using the _out/uki-signing-* files as input:
$ talosctl gen secureboot database
writing _out/db.auth
writing _out/KEK.auth
writing _out/PK.auth
These files can be used to enroll the keys into the UEFI firmware automatically when booting from a SecureBoot ISO while UEFI firmware is in the setup mode.
Generating the SecureBoot Assets
Once the keys are generated, they can be used to sign the Talos boot assets to generate required ISO images, PXE boot assets, disk images, installer containers, etc. In this guide we will generate a SecureBoot ISO image and an installer image.
$ docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.5.5 secureboot-iso
profile ready:
arch: amd64
platform: metal
secureboot: true
version: v1.5.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
sdStub:
path: /usr/install/amd64/systemd-stub.efi
sdBoot:
path: /usr/install/amd64/systemd-boot.efi
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.5.0-alpha.3-35-ge0f383598-dirty
secureboot:
signingKeyPath: /secureboot/uki-signing-key.pem
signingCertPath: /secureboot/uki-signing-cert.pem
pcrSigningKeyPath: /secureboot/pcr-signing-key.pem
pcrPublicKeyPath: /secureboot/pcr-signing-public-key.pem
platformKeyPath: /secureboot/PK.auth
keyExchangeKeyPath: /secureboot/KEK.auth
signatureKeyPath: /secureboot/db.auth
output:
kind: iso
outFormat: raw
skipped initramfs rebuild (no system extensions)
kernel command line: talos.platform=metal console=ttyS0 console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 lockdown=confidentiality
UKI ready
ISO ready
output asset path: /out/metal-amd64-secureboot.iso
Next, the installer image should be generated to install Talos to disk on a SecureBoot-enabled system:
$ docker run --rm -t -v $PWD/_out:/secureboot:ro -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.5.5 secureboot-installer
profile ready:
arch: amd64
platform: metal
secureboot: true
version: v1.5.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
sdStub:
path: /usr/install/amd64/systemd-stub.efi
sdBoot:
path: /usr/install/amd64/systemd-boot.efi
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.5.5
secureboot:
signingKeyPath: /secureboot/uki-signing-key.pem
signingCertPath: /secureboot/uki-signing-cert.pem
pcrSigningKeyPath: /secureboot/pcr-signing-key.pem
pcrPublicKeyPath: /secureboot/pcr-signing-public-key.pem
platformKeyPath: /secureboot/PK.auth
keyExchangeKeyPath: /secureboot/KEK.auth
signatureKeyPath: /secureboot/db.auth
output:
kind: installer
outFormat: raw
skipped initramfs rebuild (no system extensions)
kernel command line: talos.platform=metal console=ttyS0 console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 lockdown=confidentiality
UKI ready
installer container image ready
output asset path: /out/installer-amd64-secureboot.tar
The generated container image should be pushed to some container registry which Talos can access during the installation, e.g.:
crane push _out/installer-amd64-secureboot.tar ghcr.io/<user>/installer-amd64-secureboot:v1.5.5
The generated ISO and installer images might be further customized with system extensions, extra kernel command line arguments, etc.