
Deploying tbot with Workload Identity on AWS

This guide walks you through deploying tbot on an Amazon EC2 instance and setting up Machine and Workload Identity (MWI). By the end, you'll have a working tbot service that issues SPIFFE-compatible credentials to workloads running on your EC2 instance.

Prerequisites

  • A running Teleport cluster. If you do not have one, read Getting Started.

  • The tctl and tsh clients.

    Installing tctl and tsh clients
    1. Determine the version of your Teleport cluster. The tctl and tsh clients must be at most one major version behind your Teleport cluster version. Send a GET request to the Proxy Service at /v1/webapi/find and use a JSON query tool to obtain your cluster version. Replace teleport.example.com:443 with the web address of your Teleport Proxy Service:

      TELEPORT_DOMAIN=teleport.example.com:443
      TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')"
    2. Follow the instructions for your platform to install tctl and tsh clients:

      Download the signed macOS .pkg installer for Teleport, which includes the tctl and tsh clients:

      curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg

      In Finder, double-click the pkg file to begin installation.

      Danger: Using Homebrew to install Teleport is not supported. The Teleport package in Homebrew is not maintained by Teleport, and we can't guarantee its reliability or security.

  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. For example, run the following command, assigning teleport.example.com to the domain name of the Teleport Proxy Service in your cluster and email@example.com to your Teleport username:
    tsh login --proxy=teleport.example.com --user=email@example.com
    tctl status

    Cluster teleport.example.com

    Version 19.0.0-dev

    CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
  • An AWS IAM role that you wish to grant access to your Teleport cluster. This role must be granted sts:GetCallerIdentity; a sample policy is shown after this list. In this guide, this role will be named teleport-bot-role.
  • An AWS EC2 virtual machine, with the IAM role attached, on which you wish to install Machine ID.
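
If the role does not yet allow this action, a minimal identity-based policy attached to teleport-bot-role could look like the following. This is a sketch; adapt it to your organization's policy conventions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    }
  ]
}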

Step 1/9. Install tbot

This step is completed on the AWS EC2 instance.

Install tbot on the EC2 instance that will use Machine ID.

Download and install the appropriate Teleport package for your platform:

To install a Teleport Agent on your Linux server:

The easiest installation method, for Teleport versions 17.3 and above, is the cluster install script. It will use the best version, edition, and installation mode for your cluster.

  1. Assign teleport.example.com:443 to your Teleport cluster hostname and port, but not the scheme (https://).

  2. Run your cluster's install script:

    curl "https://teleport.example.com:443/scripts/install.sh" | sudo bash

On older Teleport versions:

  1. Assign edition to one of the following, depending on your Teleport edition:

    Edition                              Value
    Teleport Enterprise Cloud            cloud
    Teleport Enterprise (Self-Hosted)    enterprise
    Teleport Community Edition           oss
  2. Get the version of Teleport to install. If you have automatic agent updates enabled in your cluster, query the latest Teleport version that is compatible with the updater:

    TELEPORT_DOMAIN=teleport.example.com:443
    TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

    Otherwise, get the version of your Teleport cluster:

    TELEPORT_DOMAIN=teleport.example.com:443
    TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/ping | jq -r '.server_version')"
  3. Install Teleport on your Linux server:

    curl https://cdn.teleport.dev/install.sh | bash -s ${TELEPORT_VERSION} edition

    The installation script detects the package manager on your Linux server and uses it to install Teleport binaries. To customize your installation, learn about the Teleport package repositories in the installation guide.

Step 2/9. Create a Bot

This step is completed on your local machine.

Next, you need to create a Bot. A Bot is a Teleport identity for a machine or group of machines. Like users, bots have a set of roles and traits which define what they can access.

Create bot.yaml:

kind: bot
version: v1
metadata:
  # name is a unique identifier for the Bot in the cluster.
  name: example
spec:
  # roles is a list of roles to grant to the Bot. Don't worry if you don't know
  # what roles you need to specify here, the Access Guides will walk you through
  # creating and assigning roles to the already created Bot.
  roles: []

Make sure you replace example with a unique, descriptive name for your Bot.

Use tctl to apply this file:

tctl create bot.yaml
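
To confirm the Bot was created, you can list the bots in your cluster (the exact output columns may vary between Teleport versions):

tctl bots ls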

Step 3/9. Create a join token

This step is completed on your local machine.

Create bot-token.yaml:

kind: token
version: v2
metadata:
  # name will be specified in the `tbot` configuration to use this token
  name: example-bot
spec:
  roles: [Bot]
  # bot_name should match the name of the bot created earlier in this guide.
  bot_name: example
  join_method: iam
  # Restrict the AWS account and (optionally) ARN that can use this token.
  # This information can be obtained from running the
  # "aws sts get-caller-identity" command from the CLI.
  allow:
    - aws_account: "111111111111"
      aws_arn: "arn:aws:sts::111111111111:assumed-role/teleport-bot-role/i-*"

Replace:

  • 111111111111 with the ID of your AWS account.
  • teleport-bot-role with the name of the AWS IAM role you created and assigned to the EC2 instance.
  • example with the name of the bot you created in the second step.
  • i-* indicates that any instance with the specified role can use the join method. If you wish to restrict this to an individual instance, replace i-* with the full instance ID.

Use tctl to apply this file:

tctl create -f bot-token.yaml
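
To confirm the join token exists, you can list the cluster's join tokens (output format may vary between Teleport versions):

tctl tokens ls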

Step 4/9. Configure tbot

This step is completed on the AWS EC2 instance.

Create /etc/tbot.yaml:

version: v2
proxy_server: example.teleport.sh:443
onboarding:
  join_method: iam
  token: example-bot
storage:
  type: memory
# outputs will be filled in during the completion of an access guide.
outputs: []

Replace:

  • example.teleport.sh:443 with the address of your Teleport Proxy Service or Auth Service. Prefer using the address of a Teleport Proxy Service instance.
  • example-bot with the name of the token you created in the third step.

Now, you must decide if you want to run tbot as a daemon or in one-shot mode.

In daemon mode, tbot runs continually, renewing the short-lived credentials for the configured outputs on a fixed interval. This is often combined with a service manager (such as systemd) in order to run tbot in the background. This is the default behaviour of tbot.

In one-shot mode, tbot generates short-lived credentials and then exits. This is useful when combining tbot with scripting (such as in CI/CD) as it allows further steps to be dependent on tbot having succeeded. It is important to note that the credentials will expire if not renewed and to ensure that the TTL for the certificates is long enough to cover the length of the CI/CD job.

Configuring tbot as a daemon

By default, tbot will run in daemon mode. However, this must then be configured as a service within the service manager on the Linux host. The service manager will start tbot on boot and ensure it is restarted if it fails. For this guide, systemd will be demonstrated but tbot should be compatible with all common alternatives.

Use tbot install systemd to generate a systemd service file:

sudo tbot install systemd \
  --write \
  --config /etc/tbot.yaml \
  --user teleport \
  --group teleport \
  --anonymous-telemetry

Ensure that you replace:

  • teleport with the name of the Linux user you wish to run tbot as.
  • /etc/tbot.yaml with the path to the configuration file you have created.

You can omit --write to print the systemd service file to the console instead of writing it to disk.

--anonymous-telemetry enables the submission of anonymous usage telemetry. This helps us shape the future development of tbot. You can disable it by omitting this flag.

Next, enable the service so that it will start on boot and then start the service:

sudo systemctl daemon-reload
sudo systemctl enable tbot
sudo systemctl start tbot

Check the service has started successfully:

sudo systemctl status tbot
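
If the service is not running as expected, the tbot logs are a good place to start. For example, assuming the systemd unit generated above is named tbot, you can follow them with:

sudo journalctl -u tbot -f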

Configuring tbot for one-shot mode

To use tbot in one-shot mode, modify /etc/tbot.yaml to add oneshot: true:

version: v2
oneshot: true
auth_server: ...

Now, you should test your tbot configuration. When started, several log messages will be emitted before it exits with status 0:

export TELEPORT_ANONYMOUS_TELEMETRY=1
tbot start -c /etc/tbot.yaml

TELEPORT_ANONYMOUS_TELEMETRY enables the submission of anonymous usage telemetry. This helps us shape the future development of tbot. You can disable it by omitting this variable.

Step 5/9. Configure outputs

You have now prepared the base configuration for tbot. At this point, it identifies itself to the Teleport cluster and renews its own credentials but does not output any credentials for other applications to use.

Follow one of the access guides to configure an output that meets your access needs.

Step 6/9. Configure Workload Identity

Next, we'll configure Workload Identity on the target resource you just created. A Workload Identity defines how a specific workload, or group of workloads, receives SPIFFE-based credentials from Teleport.

You’ll:

  • Create a Workload Identity resource that defines your workload’s SPIFFE ID.
  • Configure RBAC so your Bot can issue credentials for that identity.
  • Update your tbot instance to expose a SPIFFE Workload API endpoint that workloads can connect to for SPIFFE SVID-compatible credentials.

Before proceeding, you'll want to determine the SPIFFE ID path that your workload will use. In our example, we'll use /svc/foo. We provide more guidance on choosing a SPIFFE ID structure in the Best Practices guide.

Create a new file called workload-identity.yaml:

kind: workload_identity
version: v1
metadata:
  name: example-workload-identity
  labels:
    example: getting-started
spec:
  spiffe:
    id: /svc/foo

Replace:

  • example-workload-identity with a name that describes your use-case.
  • /svc/foo with the SPIFFE ID path you have decided to issue.

Use tctl create -f ./workload-identity.yaml to create the Workload Identity.
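
As a quick sanity check, you can read the resource back. This assumes that tctl get supports the workload_identity kind in your Teleport version:

tctl get workload_identity/example-workload-identity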

You'll need to create a role that will grant access to the Workload Identity that you have just created. As with other Teleport resources, access is granted by specifying label matchers on the role that will match the labels on the resource itself.

In addition to granting access to the resource, we will also need to grant the ability to read and list the Workload Identity resource type.

Create workload-identity-issuer-role.yaml:

kind: role
version: v6
metadata:
  name: example-workload-identity-issuer
spec:
  allow:
    workload_identity_labels:
      example: ["getting-started"]
    rules:
    - resources:
      - workload_identity
      verbs:
      - list
      - read

Use tctl create -f ./workload-identity-issuer-role.yaml to create the role.

Now, use tctl bots update to add the role to the Bot. Replace example-bot with the name of the Bot you created in the deployment guide and example-workload-identity-issuer with the name of the role you just created:

tctl bots update example-bot --add-roles example-workload-identity-issuer

Step 7/9. Expose a Workload API endpoint

To issue SPIFFE credentials to workloads, tbot must expose a Workload API endpoint. You’ll configure this by adding the workload-identity-api service to your tbot configuration.

First, determine where you wish this socket to be created. In our example, we'll use /opt/machine-id/workload.sock. You may wish to choose a directory that is only accessible by the processes that will need to connect to the Workload API.

Modify your tbot configuration file to include the workload-identity-api service:

services:
- type: workload-identity-api
  listen: unix:///opt/machine-id/workload.sock
  selector:
    name: example-workload-identity

Replace:

  • /opt/machine-id/workload.sock with the path to the socket you wish to create.
  • example-workload-identity with the name of the Workload Identity resource you created earlier.

Start or restart your tbot instance to apply the new configuration.
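
For example, if you configured tbot as a systemd service earlier in this guide:

sudo systemctl restart tbot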

Step 8/9. Testing the Workload API with tbot spiffe-inspect

Use tbot spiffe-inspect to verify that the Workload API endpoint is issuing SPIFFE credentials correctly. This command connects to the endpoint, requests SVIDs, and prints detailed debug information.

Before configuring your workload to use the Workload API, we recommend using this command to ensure that the Workload API is behaving as expected.

Use the spiffe-inspect command with --path to specify the path to the Workload API socket, replacing /opt/machine-id/workload.sock with the path you configured in the previous step:

tbot spiffe-inspect --path unix:///opt/machine-id/workload.sock
INFO [TBOT] Inspecting SPIFFE Workload API Endpoint unix:///opt/machine-id/workload.sock tbot/spiffe.go:31
INFO [TBOT] Received X.509 SVID context from Workload API bundles_count:1 svids_count:1 tbot/spiffe.go:46
SVIDS
- spiffe://example.teleport.sh/svc/foo
  - Expiry: 2024-03-20 10:55:52 +0000 UTC
Trust Bundles
- example.teleport.sh

Step 9/9. Configuring your workload to use the Workload API

Now that you know that the Workload API is behaving as expected, you can configure your workload to use it. The exact steps will depend on the workload.

In cases where you have used the SPIFFE SDKs, you can configure the SPIFFE_ENDPOINT_SOCKET environment variable to point to the socket created by tbot.
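
For example, in the shell or service unit that launches your workload, you could export the variable so that the SDK connects to the socket configured earlier in this guide (replace the path if you chose a different location):

export SPIFFE_ENDPOINT_SOCKET=unix:///opt/machine-id/workload.sock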

Configuring Unix Workload Attestation

By default, an SVID listed under the Workload API service will be issued to any workload that connects to the Workload API. You may wish to restrict which SVIDs are issued based on certain characteristics of the workload. This is known as Workload Attestation.

When using the Unix listener, tbot supports workload attestation based on three characteristics of the workload process:

  • uid: The UID of the user that the workload process is running as.
  • gid: The primary GID of the user that the workload process is running as.
  • pid: The PID of the workload process.

Within a Workload Identity, you can configure rules based on the attributes determined via workload attestation. Each rule contains a number of tests and all tests must pass for the rule to pass. At least one rule must pass for the Workload Identity to be allowed to issue a credential.

For example, to configure a Workload Identity to be issued only to workloads that are running as the user with ID 1000 or running as a user with a primary group ID of 50:

kind: workload_identity
version: v1
metadata:
  name: example-workload-identity
  labels:
    example: getting-started
spec:
  rules:
    allow:
    - conditions:
      - attribute: workload.unix.uid
        eq:
          value: 1000
    - conditions:
      - attribute: workload.unix.gid
        eq:
          value: 50
  spiffe:
    id: /svc/foo
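
To check that the rules behave as intended, you can re-run the inspection command from Step 8 under different users. This is a sketch that assumes a user with UID 1000 exists on the host and can access the socket; that user should receive an SVID, while non-matching users should not:

sudo -u "#1000" tbot spiffe-inspect --path unix:///opt/machine-id/workload.sock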

Next steps