Ansible + Tailscale = 🎉

Introduction

Before we jump into the concrete setup, here’s a short explanation of why this solution is useful. Originally, remote machines were configured manually — you logged in, ran commands, and called it a day. As the number of hosts grew and tasks became repetitive, people started writing ad‑hoc shell scripts to automate steps. That helped in the short term but led to many scattered scripts with inconsistent conventions, which made maintenance difficult.

To solve that, teams adopted configuration‑management tools such as Ansible, Puppet, Chef, and Salt. Containerization and orchestration (Docker, then Kubernetes) added higher‑level primitives for running and scaling distributed applications. Kubernetes is powerful for large, dynamic clusters, but for small deployments it often adds unnecessary complexity and operational overhead.

For a single machine or a handful of hosts, a configuration‑management approach with Ansible is usually simpler, more maintainable, and easier to reason about. In this post I’ll show how to use Ansible to manage Docker containers on remote hosts that live in private networks — and how to access them securely using Tailscale so you don’t need to open public ports or manage SSH keys manually.

The idea is simple: once a machine is joined to our Tailscale network with SSH enabled, Ansible can connect to it over SSH just like to any other host. To run Ansible automatically whenever the repository changes, we’ll add a GitHub Actions workflow that runs on push. And for the workflow runner to reach machines on the tailnet, we’ll temporarily add the runner itself to our Tailscale network using Tailscale’s GitHub Action.

This demo configures two AWS VMs to:

  • Install common utilities (tmux, vim, jq, etc.)
  • Install Docker using Ansible roles/collections
  • Run two Nginx containers on one VM and two Traefik containers on the other
  • Keep both VMs inaccessible from the public internet — reachable only via Tailscale
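
All of the files referenced below live in one repository. Pieced together from the snippets that follow, the layout looks roughly like this (the traefik role is analogous to the nginx role and is not reproduced in full in this post):
.
├── .github/
│   └── workflows/
│       └── ansible-run-playbook.yml
└── ansible/
    ├── inventory.ini
    ├── requirements.yml
    ├── playbook.yml
    └── roles/
        ├── base/tasks/main.yml
        ├── nginx/tasks/
        │   ├── main.yml
        │   ├── nginx_alpine.yml
        │   └── nginx_latest.yml
        └── traefik/…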

Prerequisites and Setup

  • Install Tailscale on each target host.
  • Join each host to your tailnet with SSH enabled (--ssh).
  • Ensure the hostnames are resolvable via Tailscale MagicDNS.
#!/usr/bin/env bash
# Bootstrap for each VM (run as root, e.g. via cloud-init user data)

apt-get update -y
apt-get install -y curl

# Install Tailscale and join the tailnet with Tailscale SSH enabled.
# Replace TAILSCALE_AUTH_KEY with an auth key from the Tailscale admin console,
# and use --hostname aws-worker-2 on the second VM.
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey TAILSCALE_AUTH_KEY --hostname aws-worker-1 --ssh
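
Once a VM has joined the tailnet you can sanity-check it from any other device on the same tailnet (assuming your tailnet policy allows SSH to these hosts), before Ansible gets involved at all:
# Confirm the new host shows up as a peer
tailscale status | grep aws-worker

# Tailscale SSH: no key distribution needed, just the MagicDNS name
ssh ubuntu@aws-worker-1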

Inventory

# ansible/inventory.ini
[workers]
a1 ansible_host=aws-worker-1
a2 ansible_host=aws-worker-2
  • Defines group workers containing hosts a1 and a2.
  • ansible_host points to Tailscale MagicDNS names (Tailscale IPs also work).
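
With the inventory in place you can already check connectivity with an ad-hoc ping (the stock Ubuntu AMIs use the ubuntu user, which is also what the CI workflow below passes via --user; alternatively, set ansible_user=ubuntu in the inventory):
ansible workers --inventory ansible/inventory.ini --user ubuntu -m ping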

Ansible Requirements

# ansible/requirements.yml
roles:
  - name: geerlingguy.docker
    version: 7.4.1

collections:
  - name: community.docker
    version: ">=3.0.0"
  • geerlingguy.docker role: installs Docker Engine on the hosts.
  • community.docker collection: provides the docker_container module used in the roles below.

Playbook

# ansible/playbook.yml
- name: Install dependencies
  hosts: workers
  become: yes
  roles:
    - base
    - geerlingguy.docker

- name: Install nginx on AWS Worker 1
  hosts: a1
  become: yes
  roles:
    - nginx

- name: Install traefik on AWS Worker 2
  hosts: a2
  become: yes
  roles:
    - traefik
  • Installs common utilities and Docker on all workers (base and geerlingguy.docker roles).
  • Deploys Nginx on a1 and Traefik on a2.
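
None of this requires CI; from any machine that is on the tailnet you can install the dependencies and run the playbook directly, which is a useful sanity check before automating it:
# Install the role and collection from the requirements file
ansible-galaxy install -r ansible/requirements.yml

# Run the playbook against both workers over Tailscale SSH
ansible-playbook --inventory ansible/inventory.ini --user ubuntu ansible/playbook.yml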

Base Role

# ansible/roles/base/tasks/main.yml
- name: Install common utilities
  apt:
    name:
      - tmux
      - vim
      - jq
    state: present
  • Installs a small set of common CLI tools across all hosts.

Nginx Role

  • Tasks are split to keep responsibilities clear and files small.
# ansible/roles/nginx/tasks/main.yml
- name: Nginx Alpine
  import_tasks: nginx_alpine.yml

- name: Nginx Latest
  import_tasks: nginx_latest.yml
# ansible/roles/nginx/tasks/nginx_alpine.yml
- name: Start nginx:alpine container
  community.docker.docker_container:
    name: nginx_alpine
    image: nginx:alpine
    state: started
    ports:
      - "8080:80"
    restart_policy: "no"   # quoted so YAML doesn't turn it into a boolean
# ansible/roles/nginx/tasks/nginx_latest.yml
- name: Start nginx:latest container
  community.docker.docker_container:
    name: nginx_latest
    image: nginx:latest
    state: started
    ports:
      - "8081:80"
    restart_policy: "no"   # quoted so YAML doesn't turn it into a boolean
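
Traefik Role

  • The traefik role is not reproduced in full here; it follows the same pattern as the nginx role. A minimal sketch (container names, image tags, and host ports are placeholders; the real values live in the repository):
# ansible/roles/traefik/tasks/main.yml
- name: Start traefik v2 container
  community.docker.docker_container:
    name: traefik_v2
    image: traefik:v2.11
    state: started
    ports:
      - "8082:80"
    restart_policy: "no"

- name: Start traefik v3 container
  community.docker.docker_container:
    name: traefik_v3
    image: traefik:v3.1
    state: started
    ports:
      - "8083:80"
    restart_policy: "no"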

CI Workflow

To run Ansible from CI, the workflow does four things:

  1. Check out the repository.
  2. Temporarily add the runner to your Tailscale network using the Tailscale GitHub Action.
  3. Install Ansible roles.
  4. Run the playbook against the Tailscale-accessible hosts.
# .github/workflows/ansible-run-playbook.yml
---
name: Ansible Run Playbook

on:
  push:
    branches:
      - main
    paths:
      - 'ansible/**'

jobs:
  run-playbook:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v5

      - name: Set up Tailscale
        uses: tailscale/github-action@v4
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: Install Ansible roles
        run: ansible-galaxy install -r ansible/requirements.yml

      - name: Run Ansible playbook
        uses: dawidd6/action-ansible-playbook@v5
        with:
          playbook: ansible/playbook.yml
          options: |
            --inventory ansible/inventory.ini
            --user ubuntu
            --verbose
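  • The workflow only runs on pushes to main that touch files under ansible/.
  • The Tailscale step joins the runner to the tailnet only for the duration of the job, authenticating with an OAuth client stored in the TS_OAUTH_CLIENT_ID and TS_OAUTH_SECRET repository secrets and tagging the node as tag:ci.
  • The playbook step connects to the workers over Tailscale SSH as the ubuntu user, using the inventory and playbook from the checked-out repository.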

Conclusion

Tailscale keeps SSH off the public internet entirely, and by temporarily joining the GitHub Actions runner to the tailnet you can still run Ansible against those private hosts on every push, with no open ports and no manual SSH key management.

  • You can find the source code for this project here.
  • A live demo is available here.
