Deploying Huntress EDR on Linux Across Your Client Base via RMM
Huntress provides a one-liner to install their Linux EDR agent. It works fine when you run it manually on a single machine you control. It does not work reliably when you fire it across dozens of client endpoints via an RMM, against distros you did not pick, behind firewalls configured however the client configured them, on machines you have never touched.
The gap between "one-liner that installs the agent" and "deployment that succeeds consistently at scale" is a wrapper script. This post is about that wrapper, the decisions behind it, and three specific bugs that would have silently broken deployments without the fixes.
What the Script Has to Do
The Huntress one-liner is the last thing the script runs, not the first. Before that, the script needs to know the deployment is viable: right architecture, right OS, enough hardware, network path to Huntress endpoints, and a valid org key in hand. A deployment that fails in the middle (after the installer has partially run) is harder to clean up than one that fails fast with a clear reason.
Supported distributions:
- Ubuntu 22.04, 24.04, 25.04
- Debian 11, 12, 13
- RHEL 8.6+, 9.x, 10.x
- CentOS Stream 9, 10
- SUSE Linux 12.x, 15.x
- Fedora 41, 42
Explicitly unsupported (script detects and exits immediately):
- WSL (Windows Subsystem for Linux)
- Containers (Docker, Kubernetes)
Huntress documents both exclusions. There is no value in letting the installer run and fail in an unsupported environment when you can check first and produce a clear error.
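Both exclusions come down to simple string and file probes. A sketch of the same signals the full script checks, factored into predicates so they can be tested against sample inputs (the helper names here are illustrative, not functions from the script):

```shell
# Predicates mirroring the script's WSL and container signals.
# looks_like_wsl / looks_like_container are illustrative helper names.
looks_like_wsl()       { printf '%s' "$1" | grep -qi 'microsoft'; }
looks_like_container() { printf '%s' "$1" | grep -qE 'docker|kubepods|lxc'; }

# Probe the live system (either block may print nothing on a normal host).
if looks_like_wsl "$(cat /proc/version 2>/dev/null)"; then
  echo "WSL detected"
fi
if [ -f /.dockerenv ] || looks_like_container "$(cat /proc/1/cgroup 2>/dev/null)"; then
  echo "container detected"
fi
```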
Pre-flight Checks
The script runs these checks in order before touching anything else:
- Architecture (x86_64 only): uname -m must return x86_64.
- WSL detection: checks for /proc/sys/fs/binfmt_misc/WSLInterop. If it exists, the machine is running under WSL and the script exits.
- Container detection: checks for /.dockerenv and inspects cgroup markers. If either signals a container runtime, the script exits.
- Kernel version: parses uname -r and verifies the kernel is at or above 5.14.50. Huntress documents this as the minimum for full functionality.
- OS detection: reads /etc/os-release and verifies the distro and version are on the supported list above.
- Hardware minimums: at least 2 CPU cores (nproc), at least 2 GB RAM (/proc/meminfo), at least 2 GB available disk on the filesystem where the agent will land.
- Network connectivity: outbound HTTPS to huntress.io, huntresscdn.com, bugsnag.com, and s3.amazonaws.com, all on port 443. This is where the first bug lives.
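One note on the kernel check: the full script compares major/minor/patch components by hand. Where GNU coreutils is available, sort -V does the same comparison in one line. This is an equivalent sketch, not the script's actual code (version_ge is a name introduced here):

```shell
# version_ge A B — succeeds when dotted version A is at least B.
# Relies on GNU sort -V; version_ge is an illustrative helper name.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

KERNEL=$(uname -r | grep -oE '^[0-9]+\.[0-9]+\.[0-9]+')
if version_ge "$KERNEL" "5.14.50"; then
  echo "kernel $KERNEL meets the 5.14.50 minimum"
else
  echo "kernel $KERNEL is below 5.14.50"
fi
```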
Every check that fails writes a [FAIL] line to stdout with a human-readable reason and exits with a non-zero code. The RMM captures stdout per job, so a failed deployment has a specific, actionable reason attached to it rather than just a non-zero exit.
Bug 1: The /dev/tcp Hang on DROP-Rule Firewalls
The initial network check used bash's built-in /dev/tcp to probe each Huntress endpoint:
timeout 5 echo >/dev/tcp/huntresscdn.com/443
This works when the firewall sends REJECT -- the client gets an immediate RST and the connection attempt fails fast. It does not work when the firewall uses DROP. DROP-policy firewalls silently discard packets, so the kernel keeps retransmitting the SYN through its full backoff schedule before giving up, which can take minutes per attempt depending on tcp_syn_retries. And the timeout wrapper never helps: the >/dev/tcp redirection is performed by the calling shell before timeout even starts, so the 5-second limit applies only to echo, never to the connect. With four endpoints to check, a machine behind DROP-style rules could hold the check loop open for 20+ minutes. That is long enough to blow out RMM task timeouts and make the script appear completely broken -- no output, job just times out.
The fix wraps each probe in a function and moves the redirection inside a bash -c subprocess that timeout supervises:
check_endpoint() {
    local host="$1"
    local port="${2:-443}"
    if timeout 5 bash -c "echo >/dev/tcp/$host/$port" 2>/dev/null; then
        echo "[PASS] Reachable: $host:$port"
        return 0
    else
        echo "[FAIL] Cannot reach $host:$port — verify firewall rules allow outbound HTTPS"
        return 1
    fi
}
Because the connect now happens inside a subprocess that timeout can kill, the 5-second limit actually fires whether the firewall uses REJECT or DROP. An unreachable endpoint produces an immediate [FAIL] line and returns, rather than the entire check loop stalling. If all four endpoints are unreachable behind a DROP firewall, the script runs four 5-second probes sequentially, writes four [FAIL] lines, and exits in 20 seconds total. That is a job that finishes and tells you what is wrong, not a job that disappears.
/dev/tcp is a bash extension. It does not exist in dash, which is the default /bin/sh on Debian and Ubuntu -- and the wrapper leans on other bashisms as well ([[ ]], pipefail), so invoking it with sh breaks more than the probes. The script must be dispatched explicitly as bash, not sh. Set this in the RMM script execution settings, not in a shebang (the shebang is irrelevant when the RMM calls an interpreter directly).
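A guard at the top of the script can turn that silent failure into an explicit one. This is a suggested addition, not part of the wrapper as published; it relies on the fact that only bash sets BASH_VERSION:

```shell
# Fail fast if the RMM dispatched this script to sh/dash instead of bash.
# Only bash sets BASH_VERSION; dash leaves it unset.
if [ -z "${BASH_VERSION:-}" ]; then
  echo "[FAIL] This script requires bash (it uses /dev/tcp); re-dispatch with bash, not sh" >&2
  exit 1
fi
echo "[PASS] running under bash"
```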
Bug 2: ANSI Color Codes in RMM Output
The script uses ANSI color codes so that terminal output is readable when you run it manually. [PASS] lines in green, [FAIL] lines in red, [WARN] in yellow. This is useful during development and testing.
RMM platforms capture stdout as plain text and display it in a log pane. They do not interpret ANSI escape codes. When colors are unconditionally written, every status line renders as literal garbage in the RMM log:
\033[0;31m[FAIL]\033[0m Cannot reach huntresscdn.com:443
Readable on a terminal. Unreadable everywhere the script actually runs in production.
The fix is to detect whether stdout is connected to a TTY at runtime. If it is, define the color variables. If it is not (piped, redirected, or captured by the RMM), set them to empty strings:
if [ -t 1 ]; then
GREEN='\033[0;32m'; YELLOW='\033[0;33m'; RED='\033[0;31m'; NC='\033[0m'
else
GREEN=''; YELLOW=''; RED=''; NC=''
fi
Every status line in the script uses these variables. When the script runs under the RMM, all four variables are empty strings and the output is clean plain text. When you run it in a terminal for testing, colors work normally. One block at the top of the script, no conditional logic scattered through the output calls.
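You can verify the non-interactive path without an RMM: pipe the script's output and [ -t 1 ] goes false. A minimal standalone version of the pattern:

```shell
# Colors only when stdout is a terminal; empty strings otherwise.
if [ -t 1 ]; then
  RED=$'\033[0;31m'; NC=$'\033[0m'
else
  RED=''; NC=''
fi

status_fail() { printf '%s[FAIL]%s %s\n' "$RED" "$NC" "$1"; }

status_fail "Cannot reach huntresscdn.com:443"
```

Run it once in a terminal and once piped through cat to see the colored and plain variants.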
Bug 3: RMM Token Substitution for the Org Key
Each client organization has a unique Huntress org key. In ConnectWise RMM, the right place to store this is a site-level custom field named Organization_Key (Administration > Custom Fields, scope: Site, one value per client site). ConnectWise RMM supports @VariableName@ tokens in script bodies and substitutes the matching custom field value before dispatching the script to the endpoint.
The script defines the variable at the top, clearly marked:
# ConnectWise RMM replaces this token with the site-level custom field value
RMM_ORG_KEY="@Organization_Key@"
The problem is what happens when substitution does not occur. If the custom field is not configured for a given client site, the platform sends the script with the literal token string still in place. The installer then receives @Organization_Key@ as the org key, registers the agent against nothing useful, and the deployment appears to succeed (the installer exits 0, the RMM job shows green) but the agent is enrolled under a garbage org key.
The detection logic explicitly distinguishes "substitution happened" from "literal token is still in the script":
if [[ -n "$ORG_KEY_OVERRIDE" ]]; then
# Manual override via -o flag takes precedence
ORG_KEY="$ORG_KEY_OVERRIDE"
elif [[ "$RMM_ORG_KEY" != "@Organization_Key@" && -n "$RMM_ORG_KEY" ]]; then
# RMM successfully injected a value
ORG_KEY="$RMM_ORG_KEY"
else
echo "[FAIL] Org key not set. In RMM, define the Organization_Key script variable. For manual runs, use -o <org_key>."
exit 1
fi
If RMM_ORG_KEY is still the literal placeholder string, the script exits with a clear failure message before touching the installer. No silent misregistration.
The script also accepts -o <org_key> as a command-line flag. This is important for testing the wrapper manually outside the RMM without needing to mock the substitution mechanism. Running it directly to validate pre-flight logic against a specific machine does not require faking RMM variable injection.
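Both branches of that logic can be exercised outside the RMM as well. This sketch factors the check into a function (resolve_org_key is an illustrative name, not one the wrapper defines):

```shell
# Distinguish "RMM substituted a real value" from "literal token left in place".
# resolve_org_key is an illustrative helper, not a function in the wrapper.
resolve_org_key() {
  local rmm_value="$1"
  if [ "$rmm_value" != "@Organization_Key@" ] && [ -n "$rmm_value" ]; then
    echo "$rmm_value"
  else
    echo "UNSET"
  fi
}

resolve_org_key "@Organization_Key@"   # prints UNSET: substitution never happened
resolve_org_key "Acme Corp"            # prints Acme Corp: the RMM injected a value
```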
The Install Step
Once all pre-flight checks pass and the org key is validated, the install step is straightforward:
INSTALLER_URL="https://huntresscdn.com/huntress-installers/linux/huntress-linux-install.sh"
INSTALLER_TMP=$(mktemp /tmp/huntress-install-XXXXXX.sh)
curl -fsSL "$INSTALLER_URL" -o "$INSTALLER_TMP" || {
echo "[FAIL] Could not download Huntress installer"
exit 1
}
chmod +x "$INSTALLER_TMP"
bash "$INSTALLER_TMP" -a "$ACCOUNT_KEY" -o "$ORG_KEY"
Pulling the installer at runtime keeps the wrapper lightweight. You do not maintain a versioned copy of the installer in your RMM script library, and you always run the current installer without needing to update the wrapper when Huntress ships a new agent version. The tradeoff is a dependency on huntresscdn.com being reachable at install time (which the pre-flight network check already validates).
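If a client's change control cannot accept running whatever installer is current, the usual mitigation is to pin a checksum, at the cost of updating the wrapper every time Huntress ships a new installer. A sketch of the verification step (the expected hash is a placeholder you would supply after vetting a build yourself; no published Huntress hash is assumed here):

```shell
# Illustrative integrity check for a downloaded installer.
# verify_sha256 is a helper name introduced for this sketch.
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Usage sketch, after the curl download:
#   verify_sha256 "$INSTALLER_TMP" "$EXPECTED_SHA256" \
#     || { echo "[FAIL] installer checksum mismatch"; exit 1; }
```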
The wrapper exits with whatever code the installer returns. The RMM job result reflects the actual installer outcome.
Where the Three Bugs Converge
All three bugs share a common pattern: they produced failures that were invisible or misleading in the RMM view. The /dev/tcp hang produced jobs that timed out with no output. The ANSI codes produced logs that were technically present but unreadable. The token substitution issue produced jobs that appeared to succeed while silently enrolling agents incorrectly.
None of them would have surfaced from running the installer manually on a test machine. They only appear when the script runs at scale under RMM execution conditions (captured stdout, no TTY, substitution tokens in play, firewalls you did not configure). Testing manually and calling it done is how you ship a wrapper that fails quietly in production.
The pre-flight checks are not defensive programming for its own sake. Each one maps to a class of endpoints you will encounter across a client base: machines on the wrong architecture, kernel too old to update right now, firewall rules configured without considering EDR requirements. The checks surface those machines immediately with specific reasons, rather than letting the installer run and produce cryptic failures or partial enrollments.
Build the wrapper. Test it under RMM execution conditions, not just your terminal. The bugs that matter are the ones you cannot see from where you are testing.
Full Script
#!/usr/bin/env bash
# =============================================================================
# deploy-huntress-linux.sh
# Huntress EDR Linux Agent — Multi-Distro Deployment Wrapper
#
# PURPOSE:
# Wraps the official Huntress Linux installer with pre-flight validation,
# RMM-compatible structured output, and reliable org key injection.
# Designed to run via ConnectWise RMM (or any RMM that supports bash
# script execution) as well as directly from the terminal.
#
# SUPPORTED DISTRIBUTIONS:
# Ubuntu: 22.04, 24.04, 25.04
# Debian: 11, 12, 13
# RHEL: 8.6+, 9.x, 10.x
# CentOS Stream: 9, 10
# SUSE Linux: 12.x, 15.x
# Fedora: 41, 42
#
# EXPLICITLY NOT SUPPORTED (script will exit cleanly):
# - Windows Subsystem for Linux (WSL)
# - Containers (Docker, Kubernetes, LXC)
#
# USAGE:
# # Via RMM: the @Organization_Key@ token is replaced by ConnectWise RMM
# # using a site-level custom field named Organization_Key before the script
# # is dispatched to the endpoint. Set this custom field per client site in
# # ConnectWise RMM (Administration > Custom Fields, scope: Site).
#
# # Manual / direct execution — pass org key with -o flag:
# sudo bash deploy-huntress-linux.sh -o "Your Client Organization Name"
#
# RMM TOKEN SUBSTITUTION:
# ConnectWise RMM supports @VariableName@ tokens in script bodies and
# substitutes them before dispatching the script to the endpoint. The value
# comes from the matching site-level custom field for the target site.
# The ACCOUNT_KEY below is your Huntress account-level key (shared across
# all deployments). The ORG_KEY is per-client and must be set as a custom
# field value per site in ConnectWise RMM.
#
# EXIT CODES:
# 0 — Success: Huntress agent installed and running
# 1 — Pre-flight failure (unsupported OS, missing org key, failed check)
# 2 — Installation failure (Huntress installer returned non-zero)
# 3 — Post-install verification failure (service not running after install)
#
# REQUIREMENTS:
# - Must be run as root (sudo)
# - curl must be installed
# - Outbound HTTPS (port 443) to:
# huntress.io
# huntresscdn.com
# bugsnag.com
# s3.amazonaws.com
# =============================================================================
set -euo pipefail
# ─── RMM Token Substitution ───────────────────────────────────────────────────
# Replace YOUR_HUNTRESS_ACCOUNT_KEY with your actual Huntress account key.
# This is the same key for all clients — it identifies your MSP account.
# If deploying via RMM, you can also inject this via a platform variable.
ACCOUNT_KEY="YOUR_HUNTRESS_ACCOUNT_KEY"
# ConnectWise RMM replaces this token with the value of the site-level custom
# field named Organization_Key before dispatching the script to the endpoint.
# If you see the literal string "@Organization_Key@" at runtime, either the
# custom field is not configured for this site or you are running the script
# manually (use the -o flag instead).
RMM_ORG_KEY="@Organization_Key@"
# Huntress installer URL — always pulls the latest version
INSTALLER_URL="https://huntresscdn.com/huntress-installers/linux/huntress-linux-install.sh"
# ─── Output Helpers ───────────────────────────────────────────────────────────
# RMM platforms capture stdout as plain text. They do not interpret ANSI escape
# sequences — codes like \033[32m render as literal garbage in the RMM log pane.
#
# Solution: detect whether stdout is a TTY (interactive terminal). If yes,
# use color codes for readability. If no (piped output or RMM execution),
# use empty strings so no escape codes appear in the captured output.
#
# Note: 'dash' (the default /bin/sh on Debian/Ubuntu) does not support
# /dev/tcp. This script must be explicitly invoked with bash, not sh.
if [ -t 1 ]; then
C_GREEN='\033[0;32m'
C_YELLOW='\033[0;33m'
C_RED='\033[0;31m'
C_CYAN='\033[0;36m'
C_BOLD='\033[1m'
C_RESET='\033[0m'
else
C_GREEN=''
C_YELLOW=''
C_RED=''
C_CYAN=''
C_BOLD=''
C_RESET=''
fi
print_header() {
echo ""
echo "━━━ $1 ━━━"
}
pass() { echo -e "${C_GREEN}[PASS]${C_RESET} $*"; }
warn() { echo -e "${C_YELLOW}[WARN]${C_RESET} $*"; }
fail() { echo -e "${C_RED}[FAIL]${C_RESET} $*"; }
info() { echo -e "[INFO] $*"; }
die() {
fail "$*"
echo ""
echo "Deployment aborted."
exit 1
}
# ─── Argument Parsing ─────────────────────────────────────────────────────────
ORG_KEY_OVERRIDE=""
usage() {
echo "Usage: sudo bash $(basename "$0") [-o <org_key>] [-h]"
echo ""
echo " -o <org_key> Organization key (overrides RMM token substitution)"
echo " -h Show this help"
echo ""
echo "In ConnectWise RMM: set Organization_Key as a site-level custom field."
echo "Manual: pass the org key with -o \"Your Client Name\"."
exit 0
}
while getopts "o:h" opt; do
case $opt in
o) ORG_KEY_OVERRIDE="$OPTARG" ;;
h) usage ;;
*) usage ;;
esac
done
# ─── Org Key Resolution ───────────────────────────────────────────────────────
# Priority: explicit -o override → RMM token substitution → error
#
# The RMM token detection check (@Organization_Key@) distinguishes between
# "the RMM injected a real value" and "the literal placeholder is still there
# because the script is running outside an RMM or the variable wasn't set."
if [[ -n "$ORG_KEY_OVERRIDE" ]]; then
ORG_KEY="$ORG_KEY_OVERRIDE"
info "Org key source: -o flag (manual override)"
elif [[ "$RMM_ORG_KEY" != "@Organization_Key@" && -n "$RMM_ORG_KEY" ]]; then
ORG_KEY="$RMM_ORG_KEY"
info "Org key source: RMM pre-defined variable"
else
die "Organization key not set. In ConnectWise RMM, set 'Organization_Key' as a site-level custom field for this site. For manual runs, use -o \"Your Org Name\"."
fi
# Validate account key hasn't been left as placeholder
if [[ "$ACCOUNT_KEY" == "YOUR_HUNTRESS_ACCOUNT_KEY" ]]; then
die "ACCOUNT_KEY has not been set. Edit this script and replace YOUR_HUNTRESS_ACCOUNT_KEY with your Huntress account key."
fi
# ─── Root Check ───────────────────────────────────────────────────────────────
if [[ $EUID -ne 0 ]]; then
die "This script must be run as root. Use: sudo bash $(basename "$0")"
fi
# ─── Pre-Flight Checks ────────────────────────────────────────────────────────
print_header "Pre-flight Checks"
# --- Architecture ---
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then
pass "Architecture: $ARCH"
else
die "Unsupported architecture: $ARCH (Huntress Linux agent requires x86_64)"
fi
# --- WSL Detection ---
# Huntress explicitly does not support WSL. Partial installs on WSL can
# create confusing states. Detect by checking for the WSL interop binary.
if [[ -e /proc/sys/fs/binfmt_misc/WSLInterop ]] || grep -qi 'microsoft' /proc/version 2>/dev/null; then
die "Windows Subsystem for Linux (WSL) detected. Huntress is not supported on WSL."
fi
pass "Not running under WSL"
# --- Container Detection ---
# Huntress is not supported inside containers. Check multiple indicators:
# - /.dockerenv presence (Docker)
# - cgroup path containing 'docker', 'kubepods', or 'lxc'
# - /run/systemd/container marker (systemd-nspawn)
IS_CONTAINER=false
[[ -f /.dockerenv ]] && IS_CONTAINER=true
grep -qE 'docker|kubepods|lxc' /proc/1/cgroup 2>/dev/null && IS_CONTAINER=true
[[ -f /run/systemd/container ]] && IS_CONTAINER=true
if $IS_CONTAINER; then
die "Container environment detected (Docker/Kubernetes/LXC). Huntress is not supported in containers."
fi
pass "Not running inside a container"
# --- Kernel Version ---
# Huntress Linux agent requires kernel >= 5.14.50 for full eBPF-based
# behavioral detection. Earlier kernels may partially work but are unsupported.
KERNEL_VERSION=$(uname -r)
KERNEL_MAJOR=$(echo "$KERNEL_VERSION" | cut -d. -f1)
KERNEL_MINOR=$(echo "$KERNEL_VERSION" | cut -d. -f2)
KERNEL_PATCH=$(echo "$KERNEL_VERSION" | cut -d. -f3 | grep -oE '^[0-9]+')
KERNEL_OK=false
if [[ $KERNEL_MAJOR -gt 5 ]]; then
KERNEL_OK=true
elif [[ $KERNEL_MAJOR -eq 5 && $KERNEL_MINOR -gt 14 ]]; then
KERNEL_OK=true
elif [[ $KERNEL_MAJOR -eq 5 && $KERNEL_MINOR -eq 14 && ${KERNEL_PATCH:-0} -ge 50 ]]; then
KERNEL_OK=true
fi
if $KERNEL_OK; then
pass "Kernel version: $KERNEL_VERSION (>= 5.14.50 required)"
else
die "Kernel version $KERNEL_VERSION does not meet the minimum requirement (5.14.50). Update the kernel and retry."
fi
# --- OS Distribution and Version ---
if [[ ! -f /etc/os-release ]]; then
die "/etc/os-release not found — cannot determine OS distribution."
fi
. /etc/os-release
OS_ID="${ID:-unknown}"
OS_VERSION="${VERSION_ID:-unknown}"
OS_PRETTY="${PRETTY_NAME:-$OS_ID $OS_VERSION}"
# Validate against the supported distro/version matrix
OS_SUPPORTED=false
case "$OS_ID" in
ubuntu)
[[ "$OS_VERSION" =~ ^(22\.04|24\.04|25\.04)$ ]] && OS_SUPPORTED=true ;;
debian)
[[ "$OS_VERSION" =~ ^(11|12|13)$ ]] && OS_SUPPORTED=true ;;
rhel)
    # RHEL VERSION_ID looks like "8.6" or "9.4" (ID is always "rhel")
    RHEL_MAJOR=$(echo "$OS_VERSION" | cut -d. -f1)
    RHEL_MINOR=$(echo "$OS_VERSION" | cut -s -d. -f2)  # empty if VERSION_ID has no minor
if [[ $RHEL_MAJOR -eq 8 && ${RHEL_MINOR:-0} -ge 6 ]] || [[ $RHEL_MAJOR -ge 9 ]]; then
OS_SUPPORTED=true
fi
;;
centos)
# CentOS Stream 9 and 10
[[ "$OS_VERSION" =~ ^(9|10)$ ]] && OS_SUPPORTED=true ;;
sles | suse | opensuse-leap)
SUSE_MAJOR=$(echo "$OS_VERSION" | cut -d. -f1)
[[ $SUSE_MAJOR -eq 12 || $SUSE_MAJOR -eq 15 ]] && OS_SUPPORTED=true ;;
fedora)
[[ "$OS_VERSION" =~ ^(41|42)$ ]] && OS_SUPPORTED=true ;;
*)
OS_SUPPORTED=false ;;
esac
if $OS_SUPPORTED; then
pass "OS: $OS_PRETTY — supported"
else
die "OS not supported: $OS_PRETTY. Supported: Ubuntu 22.04/24.04/25.04, Debian 11/12/13, RHEL 8.6+/9.x/10.x, CentOS Stream 9/10, SUSE 12.x/15.x, Fedora 41/42."
fi
# --- Hardware Requirements ---
print_header "Hardware Requirements"
# CPU cores (Huntress requires >= 2)
CPU_CORES=$(nproc --all 2>/dev/null || grep -c ^processor /proc/cpuinfo)
if [[ $CPU_CORES -ge 2 ]]; then
pass "CPU cores: $CPU_CORES (>= 2 required)"
else
die "Insufficient CPU cores: $CPU_CORES (Huntress requires at least 2)"
fi
# RAM in MB (Huntress requires >= 2 GB)
RAM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
RAM_MB=$((RAM_KB / 1024))
if [[ $RAM_MB -ge 2048 ]]; then
pass "RAM: ${RAM_MB} MB (>= 2048 MB required)"
else
die "Insufficient RAM: ${RAM_MB} MB (Huntress requires at least 2 GB)"
fi
# Available disk space in MB (Huntress requires >= 2 GB free)
DISK_AVAIL_KB=$(df -k / | awk 'NR==2 {print $4}')
DISK_AVAIL_MB=$((DISK_AVAIL_KB / 1024))
if [[ $DISK_AVAIL_MB -ge 2048 ]]; then
pass "Available disk space: ${DISK_AVAIL_MB} MB (>= 2048 MB required)"
else
die "Insufficient disk space: ${DISK_AVAIL_MB} MB free (Huntress requires at least 2 GB)"
fi
# ─── Network Connectivity ─────────────────────────────────────────────────────
print_header "Network Connectivity"
# Check outbound HTTPS connectivity to all required Huntress endpoints.
#
# IMPORTANT — The /dev/tcp hang problem:
# A naive probe like `timeout 5 echo >/dev/tcp/host/443` does not actually
# time out: the `>/dev/tcp` redirection is performed by the calling shell
# before `timeout` starts, so the 5-second limit applies only to `echo`,
# never to the connect.
#
# Behind firewalls that DROP (silently discard) packets rather than REJECT
# (immediate RST/ICMP refuse), the kernel retransmits the SYN through its
# full backoff schedule before giving up. With 4 endpoints to check, a host
# behind DROP rules can hold the loop open for 20+ minutes — enough to blow
# out RMM task timeouts.
#
# The fix: run the redirection inside `bash -c "echo >/dev/tcp/..."` under
# `timeout`, so the connect happens in a subprocess that `timeout` can kill.
# This ensures a clean 5-second failure even behind DROP-rule firewalls.
#
# Note: /dev/tcp is a bash extension. It does not exist in dash (/bin/sh).
# This script must be run with bash explicitly.
check_endpoint() {
local host="$1"
local port="${2:-443}"
local url="$3"
if timeout 5 bash -c "echo >/dev/tcp/$host/$port" 2>/dev/null; then
pass "Reachable: $url (TCP:$port)"
return 0
else
fail "Cannot reach $url (TCP:$port)"
warn "Verify outbound HTTPS is permitted to $host on port $port"
return 1
fi
}
NETWORK_OK=true
check_endpoint "huntress.io" 443 "https://huntress.io" || NETWORK_OK=false
check_endpoint "huntresscdn.com" 443 "https://huntresscdn.com" || NETWORK_OK=false
check_endpoint "bugsnag.com" 443 "https://bugsnag.com" || NETWORK_OK=false
check_endpoint "s3.amazonaws.com" 443 "https://s3.amazonaws.com" || NETWORK_OK=false
if ! $NETWORK_OK; then
die "One or more Huntress network endpoints are unreachable. Verify firewall rules and retry."
fi
# ─── Check for Existing Installation ─────────────────────────────────────────
print_header "Existing Installation Check"
if systemctl is-active --quiet huntress-agent 2>/dev/null; then
warn "Huntress agent service is already active on this host."
info "If you need to re-register with a different org key, stop the service, remove the agent, and re-run."
info "Exiting without making changes."
exit 0
fi
info "No active Huntress agent detected — proceeding with installation."
# ─── Download Installer ───────────────────────────────────────────────────────
print_header "Downloading Huntress Installer"
# Check that curl is available
if ! command -v curl &>/dev/null; then
die "curl is not installed. Install it and retry. (e.g., apt-get install -y curl)"
fi
INSTALLER_TMP=$(mktemp /tmp/huntress-install-XXXXXX.sh)
trap 'rm -f "$INSTALLER_TMP"' EXIT # always clean up on exit
info "Downloading from: $INSTALLER_URL"
if curl -fsSL --connect-timeout 10 --max-time 60 "$INSTALLER_URL" -o "$INSTALLER_TMP"; then
pass "Installer downloaded successfully"
else
die "Failed to download Huntress installer from $INSTALLER_URL — check connectivity and retry."
fi
chmod +x "$INSTALLER_TMP"
# ─── Run Installation ─────────────────────────────────────────────────────────
print_header "Running Huntress Installer"
info "Account key: [REDACTED — not logged for security]"
info "Org key: $ORG_KEY"
echo ""
# The Huntress installer accepts:
# -a account key (your MSP/partner account key)
# -o organization key (per-client identifier, spaces allowed)
if bash "$INSTALLER_TMP" -a "$ACCOUNT_KEY" -o "$ORG_KEY"; then
pass "Huntress installer completed without errors"
else
INSTALLER_EXIT=$?
fail "Huntress installer exited with code $INSTALLER_EXIT"
info "Review the installer output above for details."
exit 2
fi
# ─── Post-Install Verification ────────────────────────────────────────────────
print_header "Post-Install Verification"
# Give the service a moment to start before checking
sleep 3
if systemctl is-active --quiet huntress-agent; then
pass "huntress-agent service is active and running"
else
fail "huntress-agent service is not active after installation"
info "Check service status: systemctl status huntress-agent"
info "Check logs: journalctl -u huntress-agent -n 50"
exit 3
fi
# Verify the agent binary is present
AGENT_BIN="/opt/huntress/bin/huntress"
if [[ -x "$AGENT_BIN" ]]; then
pass "Agent binary present: $AGENT_BIN"
else
warn "Agent binary not found at expected path $AGENT_BIN — may vary by distro/version"
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
pass "DEPLOYMENT COMPLETE"
echo " Host : $(hostname -f 2>/dev/null || hostname)"
echo " OS : $OS_PRETTY"
echo " Kernel : $KERNEL_VERSION"
echo " Org Key : $ORG_KEY"
echo " The agent will appear in the Huntress portal within a few minutes."
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
exit 0
Additional Resources
- Huntress: Linux Installation and System Requirements
- Bash manual: Redirections — covers the /dev/tcp feature and why it requires bash
- POSIX: Shell Command Language — why /dev/tcp does not exist in POSIX sh (dash)
