Quick Start — AWS EC2
Deploy your first app on AWS with PodWarden, from zero to working WordPress in 15 minutes
Complete walkthrough for deploying a production app on a single AWS EC2 instance using PodWarden. Covers every step from instance creation to a live WordPress site.
Prerequisites
- AWS account with EC2 access
- A domain name (for ingress/HTTPS)
- DNS managed by Cloudflare, Route53, or similar (for A record creation)
Step 1: Launch EC2 Instance
Minimum specs: 2 vCPUs, 4GB RAM, 30GB SSD (t3.medium or larger)
AMI: Ubuntu 24.04 LTS
Security group — open these ports:
| Port | Protocol | Source | Purpose |
|---|---|---|---|
| 22 | TCP | Your IP | SSH access |
| 80 | TCP | 0.0.0.0/0 | HTTP (ingress) |
| 443 | TCP | 0.0.0.0/0 | HTTPS (ingress) |
| 6443 | TCP | PodWarden host IP | K8s API (if PodWarden runs elsewhere) |
| 3000 | TCP | Your IP | PodWarden UI (until reverse proxy is set up) |
| 8000 | TCP | Your IP | PodWarden API (until reverse proxy is set up) |
Ports 80 and 443 must be open for ingress to work. This is the most common reason for "502" or "connection timeout" errors after everything else is configured correctly.
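If you prefer the CLI, the rules above can be sketched with the AWS CLI. This is a hedged example: the security group ID is a placeholder, and `checkip.amazonaws.com` is used only as a convenient way to discover your own IP.

```shell
# Assumptions: AWS CLI v2 is configured; SG_ID is your instance's security group.
SG_ID="sg-0123456789abcdef0"
MY_IP="$(curl -s https://checkip.amazonaws.com)/32"

# SSH from your IP only
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr "$MY_IP"

# HTTP/HTTPS from anywhere (required for ingress and Let's Encrypt)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# PodWarden UI/API from your IP (until a reverse proxy is set up)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 3000 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 8000 --cidr "$MY_IP"
```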
Step 2: Install PodWarden
SSH into your instance and run the installer:
```shell
ssh ubuntu@<your-public-ip>
curl -fsSL https://www.podwarden.com/install.sh | bash
```
The installer will ask for:
- UI address: Use `http://<public-ip>:3000` (not the private `172.x.x.x` IP)
- API address: Use `http://<public-ip>:8000` (not the private IP)
AWS-specific: The installer may suggest private IPs (e.g. 172.31.x.x). These are internal to the AWS VPC and unreachable from outside. Always use the public IP or a domain name.
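To find the public IP from inside the instance, you can query the EC2 instance metadata service (IMDSv2). This only works when run on the instance itself:

```shell
# Fetch an IMDSv2 session token, then read the public IPv4 address
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/public-ipv4
```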
After installation, verify:
```shell
curl http://localhost:8000/api/v1/health
# Should return: {"status":"ok",...}
```
Open the UI at `http://<public-ip>:3000` in your browser.
Step 3: Generate SSH Key
PodWarden needs an SSH key to manage your servers. In the PodWarden UI:
- Go to Settings → Secrets
- Click Generate SSH Key Pair
- Name it (e.g., `aws-key`)
- Copy the public key
Install the public key on your EC2 instance:
```shell
# On the EC2 instance
echo "<paste-public-key>" | sudo tee -a /root/.ssh/authorized_keys
```
Step 4: Add Host
In the PodWarden UI:
- Go to Hosts → Add Host
- Enter the public IP of your EC2 instance
- Set SSH user to `root` (or `ubuntu` if using sudo)
- Select the SSH key you created in Step 3
- Click Probe
Wait for the probe to complete. It should detect the OS, CPU, RAM, and disk.
Step 5: Provision as Control Plane
This installs K3s (lightweight Kubernetes) on your host:
- On the host detail page, click Provision as Control Plane
- Enter a cluster name (e.g., `my-cluster`)
- Select the SSH key
- Click Provision
This takes 2-5 minutes. PodWarden runs Ansible to:
- Install K3s with correct network settings
- Install Longhorn (persistent storage)
- Install Traefik (ingress controller)
- Fetch and store the kubeconfig
Do not install K3s manually or use kubectl directly. PodWarden manages the cluster configuration, kubeconfig, and component installation. Manual changes will conflict with PodWarden's state and cause failures.
After provisioning, the cluster should show as healthy with 1 node on the Clusters page.
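You can also confirm from the instance itself with read-only commands (safe to run; no state changes). The namespaces below follow the components listed above:

```shell
# Read-only sanity checks on the EC2 instance
sudo systemctl is-active k3s                  # should print "active"
sudo k3s kubectl get nodes                    # one node with STATUS "Ready"
sudo k3s kubectl get pods -n kube-system      # Traefik pods Running
sudo k3s kubectl get pods -n longhorn-system  # Longhorn pods Running
```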
Step 6: Connect to PodWarden Hub (Optional)
PodWarden Hub provides a catalog of 4000+ pre-configured app templates.
- Create an account at apps.podwarden.com
- Go to Account → API Keys and create a key
- In your PodWarden instance: Settings → Hub
- Enter Hub URL: `https://apps.podwarden.com`
- Paste your API key
- Click Save
You can now browse and import templates from the Hub catalog.
Step 7: Deploy an App
From Hub Catalog
- Go to Hub → Catalog
- Search for your app (e.g., "WordPress + MySQL")
- Click Import — this creates a local stack
- Go to Stacks → select the imported stack
- Click Deploy
- Select your cluster and namespace
- Configure environment variables (passwords will be auto-generated)
- Click Deploy
Manual Stack
- Go to Stacks → Create Stack
- Fill in image name, ports, environment variables
- For compose stacks: paste your `docker-compose.yml`
- Deploy as above
Step 8: Set Up Gateway Node
A gateway node is the public entry point for traffic into your cluster. On a single-node AWS setup, your EC2 instance is both the cluster and the gateway.
- Go to Hosts → select your host
- Scroll to the Gateway section
- Click Enable as Gateway Node
- PodWarden auto-detects the public IP — verify it shows your EC2 public IP
- If it shows a private IP, set the public IP manually
The gateway role tells PodWarden which host receives inbound traffic from the internet. Ingress rules are applied to the gateway's Traefik instance. See the Ingress, Gateway & DDNS guide for multi-node setups.
Gateway requirements
- Ports 80 and 443 must be open in the AWS security group
- Traefik must be running (installed automatically during provisioning)
- For multi-node clusters: only the node with ports forwarded should be the gateway
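A quick way to confirm the first requirement is to probe the gateway's ports from outside. A sketch using `nc` (the IP is a placeholder):

```shell
# From your workstation (not the instance): check that ports 80/443 are reachable
nc -vz -w 5 <your-ec2-public-ip> 80
nc -vz -w 5 <your-ec2-public-ip> 443
# If either times out, re-check the AWS security group rules from Step 1
```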
Step 9: Set Up Domain
You need a domain pointing to your EC2 public IP. Three options:
Option A: Cloudflare Domain (via API)
If your domain is on Cloudflare, create the DNS record via API:
```shell
# Set your Cloudflare API token and zone ID
CF_TOKEN="your-cloudflare-api-token"
CF_ZONE="your-zone-id"

# Create A record (DNS-only mode recommended for simplicity)
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/dns_records" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "A",
    "name": "wp",
    "content": "<your-ec2-public-ip>",
    "proxied": false,
    "ttl": 1
  }'
```
Finding your Zone ID: Cloudflare dashboard → your domain → Overview → right sidebar → "Zone ID".
Creating an API token: Cloudflare dashboard → My Profile → API Tokens → Create Token → "Edit zone DNS" template.
Cloudflare proxy modes:
| Mode | DNS record | How it works | SSL setting needed |
|---|---|---|---|
| DNS-only (grey cloud) | proxied: false | Traffic goes directly to your server. Traefik handles TLS via Let's Encrypt. | None |
| Proxied (orange cloud) | proxied: true | Traffic goes through Cloudflare. Cloudflare terminates TLS and connects to your origin. | Set SSL to Full (not Strict) in Cloudflare dashboard |
If using Cloudflare proxy with SSL: Strict, Cloudflare will reject Traefik's self-signed certificate and return 502. Either use Full mode or DNS-only.
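You can tell which mode a record is actually in from the response headers. Proxied responses pass through Cloudflare's edge and carry Cloudflare headers; DNS-only responses come straight from Traefik:

```shell
# Proxied (orange cloud) responses include headers such as "cf-ray" and
# "server: cloudflare"; DNS-only responses do not
curl -sI https://wp.yourdomain.com/ | grep -i -E '^(server|cf-ray):'
```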
Option B: Cloudflare Domain (via Dashboard)
- Log in to Cloudflare Dashboard
- Select your domain
- Go to DNS → Records → Add Record
- Type: `A`, Name: `wp` (or your subdomain), IPv4: `<your-ec2-public-ip>`
- Proxy status: DNS only (recommended) or Proxied (requires Full SSL mode)
- Click Save
Option C: Any DNS Provider
Create an A record pointing your subdomain to the EC2 public IP:
| Type | Name | Value | TTL |
|---|---|---|---|
| A | wp | <your-ec2-public-ip> | 300 |
Verify DNS propagation:
```shell
dig +short wp.yourdomain.com
# Should return your EC2 public IP
```
Option D: PodWarden Hub DDNS (Free Subdomain)
If you don't have a domain, PodWarden Hub provides free subdomains with automatic DDNS:
- In PodWarden: Settings → Hub (must be connected)
- Go to DDNS → Request Subdomain
- Choose a name (e.g., `myapp`) → get `myapp.podwarden.app`
- PodWarden automatically keeps the DNS record in sync with your public IP
Step 10: Create Ingress Rule
Now that your domain points to the gateway, create the ingress rule in PodWarden:
- Go to Ingress → New Rule
- Fill in:
  - Domain: `wp.yourdomain.com` (must match your DNS record)
  - Path: `/`
  - Backend Type: Managed (K8s workload)
  - Deployment: select your WordPress deployment
  - Backend Port: `80`
  - Gateway Host: select your EC2 host
  - TLS: enabled (Let's Encrypt will auto-provision a certificate)
- Click Create
- Click Apply to deploy the ingress to the cluster
PodWarden creates a Kubernetes Ingress resource and Traefik picks it up automatically. If TLS is enabled, Traefik obtains a Let's Encrypt certificate (requires ports 80+443 open).
Verify
```shell
# Test HTTP (should redirect to HTTPS or serve content)
curl -sI http://wp.yourdomain.com/

# Test HTTPS
curl -sI https://wp.yourdomain.com/
```
If you see `HTTP/1.1 302 Found` or `HTTP/1.1 200 OK`, your app is live!
Your app should now be accessible at https://wp.yourdomain.com.
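To confirm that Traefik obtained a real certificate (rather than serving its self-signed default), you can inspect the issuer with `openssl`:

```shell
# Issuer should mention Let's Encrypt; dates show the validity window
echo | openssl s_client -connect wp.yourdomain.com:443 \
  -servername wp.yourdomain.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```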
Using MCP Tools (for LLM Agents)
All steps above can be performed programmatically via PodWarden's MCP tools. If you're an LLM agent connected via MCP, use these tools instead of kubectl or direct API calls.
Never use kubectl apply, kubectl create, or kubectl delete on PodWarden-managed clusters. This creates resources PodWarden doesn't know about and breaks the system. Use only the MCP tools listed below.
Full deployment flow via MCP
```python
# 1. Generate SSH key
pw__generate_ssh_key_pair(name="aws-key")

# 2. Add host (use PUBLIC IP, not private)
pw__add_host(
    connect_address="<public-ip>",
    ssh_user="root",
    ssh_key_name="aws-key"
)

# 3. Probe host
pw__probe_host(host_id="<host-id>")

# 4. Provision as control plane (creates cluster automatically)
pw__provision_as_control_plane(
    host_id="<host-id>",
    cluster_name="my-cluster",
    ssh_key_name="aws-key"
)
# Wait for provisioning to complete — check host status

# 5. Enable gateway on the host
pw__update_host(host_id="<host-id>", is_gateway=true)

# 6. Import template from Hub catalog
pw__import_hub_template(slug="wordpress-mysql")

# 7. Create deployment
pw__create_deployment(
    cluster_id="<cluster-id>",
    workload_definition_id="<stack-id>",
    namespace="wordpress"
)

# 8. Deploy
pw__deploy_workload(deployment_id="<deployment-id>")

# 9. Create ingress rule
pw__create_ingress_rule(
    domain="wp.example.com",
    workload_assignment_id="<deployment-id>",
    backend_port=80,
    tls_enabled=true
)

# 10. Apply ingress
pw__apply_ingress_rule(rule_id="<rule-id>")
```
What NOT to do
| Don't | Do instead |
|---|---|
| `kubectl apply -f deployment.yaml` | `pw__create_deployment` + `pw__deploy_workload` |
| `kubectl create namespace wordpress` | PodWarden creates namespaces automatically during deployment |
| `kubectl create secret ...` | Use `pw__create_deployment` with `env_values` — secrets are managed per-deployment |
| `kubectl apply -f ingress.yaml` | `pw__create_ingress_rule` + `pw__apply_ingress_rule` |
| `helm install ...` | Import from Hub catalog or create a compose stack |
| Manually edit `/etc/rancher/k3s/config.yaml` | `pw__provision_as_control_plane` handles all K3s configuration |
| `POST /api/v1/clusters` with manual kubeconfig | `pw__provision_as_control_plane` — never create clusters directly |
Safe kubectl commands
These are read-only and safe to use for debugging:
```shell
kubectl get pods -n <namespace>
kubectl logs <pod-name> -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
kubectl get events -n <namespace>
```
Troubleshooting
"Command timed out" on cluster operations
PodWarden can't reach the K8s API. Check:
- Is K3s running? `sudo systemctl status k3s`
- Is the API server URL correct? Go to Clusters → your cluster → check `effective_api_server_url`
- If it shows a private IP (172.x.x.x), set the API server override to the public IP
AWS NAT loopback issue (PodWarden and K3s on the same instance): If you run both PodWarden and K3s on the same EC2 instance, you may encounter a NAT loopback restriction. AWS does not allow a host to connect to its own public IP from within the same VPC. If PodWarden stores https://<public-ip>:6443 as the cluster API URL, all kubectl calls will time out with "Command timed out after 8s".
Fix: Update the cluster's API server URL to https://127.0.0.1:6443:
```
PATCH /api/v1/clusters/<cluster-id>
{"api_server_url": "https://127.0.0.1:6443"}
```
Or via MCP: `pw__update_cluster(cluster_id="...", api_server_url="https://127.0.0.1:6443")`
After updating, PodWarden re-resolves the effective API URL using the loopback address, which is always reachable from within the same host.
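A quick reachability check from the instance (the API requires auth, so getting any HTTP response at all is what matters here):

```shell
# -k skips certificate verification (the cert is issued for the node's names/IPs)
curl -sk https://127.0.0.1:6443/version
# Version JSON or an "Unauthorized" response both mean the API server is
# reachable over loopback; a timeout means K3s is not listening on 6443
```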
K3s crash-looping
Check logs: `sudo journalctl -u k3s --no-pager -n 50`
Common causes:
- Duplicate node-ip: Check if both `/etc/rancher/k3s/config.yaml` and the service file set `--node-ip`. Remove duplicates.
- RBAC bootstrap timeout: Stop K3s, wait 10 seconds, start again. It needs uninterrupted time to initialize.
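The two causes above can be sketched as shell commands (the service file path is the common K3s default; adjust if your installer placed it elsewhere):

```shell
# Find every place node-ip is set; it should appear in at most one of these
sudo grep -rn "node-ip" /etc/rancher/k3s/ /etc/systemd/system/k3s.service 2>/dev/null

# Restart sequence for the RBAC bootstrap timeout case
sudo systemctl stop k3s
sleep 10
sudo systemctl start k3s
```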
Ingress returns 502 or connection timeout
- Security group: Ensure ports 80 and 443 are open for `0.0.0.0/0`
- Traefik not running: `sudo k3s kubectl get pods -n kube-system | grep traefik`
- DNS not pointing to correct IP: `dig +short your.domain`
Pods stuck in Pending
- No storage: Check if Longhorn is installed: `sudo k3s kubectl get pods -n longhorn-system`
- PVC not binding: Check storage class: `sudo k3s kubectl get sc`
Important: Don't Bypass PodWarden
PodWarden manages your cluster's state — kubeconfig, deployments, ingress rules, DNS, and storage. If you use kubectl directly to create or modify resources, PodWarden won't know about them and things will break.
Always use PodWarden's API, UI, or MCP tools to:
- Create/modify deployments
- Set up ingress rules
- Manage namespaces
- Configure storage
If you need to debug, `kubectl get` and `kubectl logs` are safe. But `kubectl apply`, `kubectl create`, and `kubectl delete` on PodWarden-managed resources will cause state drift.