Before we jump into the concrete setup, here’s a short explanation of why this solution is useful. Originally, remote machines were configured manually — you logged in, ran commands, and called it a day. As the number of hosts grew and tasks became repetitive, people started writing ad‑hoc shell scripts to automate steps. That helped in the short term but led to many scattered scripts with inconsistent conventions, which made maintenance difficult.
To solve that, teams adopted configuration‑management tools such as Ansible, Puppet, Chef, and Salt. Containerization and orchestration (Docker, then Kubernetes) added higher‑level primitives for running and scaling distributed applications. Kubernetes is powerful for large, dynamic clusters, but for small deployments it often adds unnecessary complexity and operational overhead.
For a single machine or a handful of hosts, a configuration‑management approach with Ansible is usually simpler, more maintainable, and easier to reason about. In this post I’ll show how to use Ansible to manage Docker containers on remote hosts that live in private networks — and how to access them securely using Tailscale so you don’t need to open public ports or manage SSH keys manually.
The idea is simple: if a machine is joined to our Tailscale network and SSH access is enabled, Ansible can connect to it because Ansible uses SSH. To run Ansible automatically whenever we change our repository, we’ll add a GitHub Actions workflow that runs on push. For the workflow runner to reach machines on Tailscale, we’ll temporarily add the runner to our Tailscale network using Tailscale’s GitHub Action.
This demo configures two AWS VMs: both get Docker and a few base utilities, the first then runs nginx containers and the second runs Traefik. Before Ansible can manage them, each VM has to join the tailnet with Tailscale SSH enabled. A small bootstrap script, run once per VM (for example as EC2 user data), takes care of that:
#!/usr/bin/env bash
apt-get update -y
apt-get install -y curl
curl -fsSL https://tailscale.com/install.sh | sh
# Replace TAILSCALE_AUTH_KEY with an auth key generated in the Tailscale admin console;
# on the second VM, use --hostname aws-worker-2.
tailscale up --authkey TAILSCALE_AUTH_KEY --hostname aws-worker-1 --ssh
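Once both VMs have joined, a quick sanity check from any other machine on the tailnet confirms that the nodes are reachable and that Tailscale SSH works. The `ubuntu` user is the default on Ubuntu AMIs and matches the `--user ubuntu` option the workflow passes later; adjust it if your image differs.
# verify the new nodes are reachable over the tailnet
tailscale ping aws-worker-1
tailscale ping aws-worker-2

# with Tailscale SSH enabled, plain ssh works for authorized tailnet users
ssh ubuntu@aws-worker-1 'echo ok'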
# ansible/inventory.ini
[workers]
a1 ansible_host=aws-worker-1
a2 ansible_host=aws-worker-2
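The host names in the inventory resolve because of Tailscale MagicDNS (enabled by default on new tailnets). With the inventory in place, an ad-hoc ping from any tailnet-connected machine verifies that Ansible can reach both workers:
ansible -i ansible/inventory.ini workers -m ping -u ubuntu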
# ansible/requirements.yml
roles:
  - name: geerlingguy.docker
    version: 7.4.1

collections:
  - name: community.docker
    version: ">=3.0.0"
# ansible/playbook.yml
- name: Install dependencies
  hosts: workers
  become: yes
  roles:
    - base
    - geerlingguy.docker

- name: Install nginx on AWS Worker 1
  hosts: a1
  become: yes
  roles:
    - nginx

- name: Install traefik on AWS Worker 2
  hosts: a2
  become: yes
  roles:
    - traefik
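Before wiring this into CI, you can run the same playbook from any machine that is already on the tailnet. The commands below mirror what the workflow does later and assume the default `ubuntu` user:
ansible-galaxy install -r ansible/requirements.yml
ansible-playbook ansible/playbook.yml --inventory ansible/inventory.ini --user ubuntu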
# ansible/roles/base/tasks/main.yml
- name: Install common utilities
  apt:
    name:
      - tmux
      - vim
      - jq
    state: present
# ansible/roles/nginx/tasks/main.yml
- name: Nginx Alpine
  import_tasks: nginx_alpine.yml

- name: Nginx Latest
  import_tasks: nginx_latest.yml
# ansible/roles/nginx/tasks/nginx_alpine.yml
- name: Start nginx:alpine container
  community.docker.docker_container:
    name: nginx_alpine
    image: nginx:alpine
    state: started
    ports:
      - "8080:80"
    restart_policy: "no"   # quoted so YAML doesn't parse it as a boolean
# ansible/roles/nginx/tasks/nginx_latest.yml
- name: Start nginx:latest container
  community.docker.docker_container:
    name: nginx_latest
    image: nginx:latest
    state: started
    ports:
      - "8081:80"
    restart_policy: "no"   # quoted so YAML doesn't parse it as a boolean
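The traefik role referenced in the playbook isn't shown above. A minimal sketch, modeled on the nginx tasks, might look like the following; the Traefik flags, port mappings, and Docker-socket mount are assumptions for a demo setup, not part of the original role:
# ansible/roles/traefik/tasks/main.yml (assumed content)
- name: Start traefik container
  community.docker.docker_container:
    name: traefik
    image: traefik:v3.0
    state: started
    command:
      - "--api.insecure=true"        # expose the dashboard on 8080 (demo only)
      - "--providers.docker=true"    # read routing rules from Docker labels
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart_policy: "no"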
To run Ansible from CI, the workflow does four things: check out the repository, join the runner to the tailnet with Tailscale's GitHub Action, install the Galaxy requirements, and run the playbook against the inventory.
# .github/workflows/ansible-run-playbook.yml
---
name: Ansible Run Playbook

on:
  push:
    branches:
      - main
    paths:
      - 'ansible/**'

jobs:
  run-playbook:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v5

      - name: Set up Tailscale
        uses: tailscale/github-action@v4
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: Install Ansible Galaxy requirements
        run: ansible-galaxy install -r ansible/requirements.yml

      - name: Run Ansible playbook
        uses: dawidd6/action-ansible-playbook@v5
        with:
          playbook: ansible/playbook.yml
          options: |
            --inventory ansible/inventory.ini
            --user ubuntu
            --verbose
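Once the workflow has run, you can check the result from any device on the tailnet; the Traefik line assumes the sketched role above with port 80 published:
curl -I http://aws-worker-1:8080   # nginx:alpine
curl -I http://aws-worker-1:8081   # nginx:latest
curl -I http://aws-worker-2        # traefik entrypoint (only if the sketched role is used)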
The takeaway: use Tailscale to avoid exposing SSH to the public internet, and manage private hosts safely by running Ansible from a GitHub Actions runner that joins the tailnet only for the duration of the job.