17 Commits

DECNET CI
499836c9e4 chore: auto-release v0.2 [skip ci] 2026-04-13 11:50:02 +00:00
bb9c782c41 Merge pull request 'tofix/merge-testing-to-main' (#6) from tofix/merge-testing-to-main into main
Some checks failed
Release / Auto-tag release (push) Successful in 16s
Release / Build, scan & push conpot (push) Failing after 4m22s
Release / Build, scan & push elasticsearch (push) Failing after 4m37s
Release / Build, scan & push llmnr (push) Failing after 4m32s
Release / Build, scan & push mongodb (push) Failing after 4m35s
Release / Build, scan & push ldap (push) Failing after 4m44s
Release / Build, scan & push docker_api (push) Failing after 4m57s
Release / Build, scan & push imap (push) Failing after 4m50s
Release / Build, scan & push http (push) Failing after 4m59s
Release / Build, scan & push mssql (push) Failing after 4m28s
Release / Build, scan & push mqtt (push) Failing after 4m38s
Release / Build, scan & push ftp (push) Failing after 5m8s
Release / Build, scan & push k8s (push) Failing after 5m3s
Release / Build, scan & push mysql (push) Failing after 1m56s
Release / Build, scan & push redis (push) Has started running
Release / Build, scan & push rdp (push) Has been cancelled
Release / Build, scan & push pop3 (push) Has been cancelled
Release / Build, scan & push postgres (push) Has been cancelled
Release / Build, scan & push sip (push) Has started running
Release / Build, scan & push smb (push) Has started running
Release / Build, scan & push smtp (push) Has started running
Release / Build, scan & push snmp (push) Has started running
Release / Build, scan & push ssh (push) Has started running
Release / Build, scan & push telnet (push) Has started running
Release / Build, scan & push tftp (push) Has started running
Release / Build, scan & push vnc (push) Has started running
Reviewed-on: #6
2026-04-13 13:49:47 +02:00
597854cc06 Merge branch 'merge/testing-to-main' into tofix/merge-testing-to-main
Some checks failed
PR Gate / Lint (ruff) (pull_request) Successful in 17s
PR Gate / SAST (bandit) (pull_request) Successful in 23s
PR Gate / Dependency audit (pip-audit) (pull_request) Successful in 36s
PR Gate / Test (pytest) (3.12) (pull_request) Failing after 1m0s
PR Gate / Test (pytest) (3.11) (pull_request) Failing after 1m10s
2026-04-13 07:48:43 -04:00
3b4b0a1016 merge: resolve conflicts between testing and main (remove tracked settings, fix pyproject deps) 2026-04-13 07:48:37 -04:00
DECNET CI
8ad3350d51 ci: auto-merge dev → testing [skip ci] 2026-04-13 05:55:46 +00:00
23ec470988 Merge pull request 'fix/merge-testing-to-main' (#4) from fix/merge-testing-to-main into main
Some checks failed
Release / Auto-tag release (push) Failing after 8s
Release / Build, scan & push cowrie (push) Has been skipped
Release / Build, scan & push docker_api (push) Has been skipped
Release / Build, scan & push elasticsearch (push) Has been skipped
Release / Build, scan & push ftp (push) Has been skipped
Release / Build, scan & push http (push) Has been skipped
Release / Build, scan & push imap (push) Has been skipped
Release / Build, scan & push k8s (push) Has been skipped
Release / Build, scan & push ldap (push) Has been skipped
Release / Build, scan & push llmnr (push) Has been skipped
Release / Build, scan & push mongodb (push) Has been skipped
Release / Build, scan & push mqtt (push) Has been skipped
Release / Build, scan & push mssql (push) Has been skipped
Release / Build, scan & push mysql (push) Has been skipped
Release / Build, scan & push pop3 (push) Has been skipped
Release / Build, scan & push postgres (push) Has been skipped
Release / Build, scan & push rdp (push) Has been skipped
Release / Build, scan & push real_ssh (push) Has been skipped
Release / Build, scan & push redis (push) Has been skipped
Release / Build, scan & push sip (push) Has been skipped
Release / Build, scan & push smb (push) Has been skipped
Release / Build, scan & push smtp (push) Has been skipped
Release / Build, scan & push snmp (push) Has been skipped
Release / Build, scan & push tftp (push) Has been skipped
Release / Build, scan & push vnc (push) Has been skipped
Reviewed-on: #4
2026-04-12 10:10:19 +02:00
4064e19af1 merge: resolve conflicts between testing and main
Some checks failed
PR Gate / Lint (ruff) (pull_request) Failing after 11s
PR Gate / Test (pytest) (3.11) (pull_request) Failing after 10s
PR Gate / Test (pytest) (3.12) (pull_request) Failing after 10s
PR Gate / SAST (bandit) (pull_request) Successful in 12s
PR Gate / Dependency audit (pip-audit) (pull_request) Failing after 13s
2026-04-12 04:09:17 -04:00
DECNET CI
ac4e5e1570 ci: auto-merge dev → testing
All checks were successful
CI / Lint (ruff) (push) Successful in 11s
CI / Test (pytest) (3.11) (push) Successful in 1m9s
CI / Test (pytest) (3.12) (push) Successful in 1m14s
CI / SAST (bandit) (push) Successful in 12s
CI / Dependency audit (pip-audit) (push) Successful in 21s
CI / Merge dev → testing (push) Has been skipped
CI / Open PR to main (push) Successful in 6s
PR Gate / Lint (ruff) (pull_request) Successful in 11s
PR Gate / Test (pytest) (3.11) (pull_request) Successful in 1m13s
PR Gate / Test (pytest) (3.12) (pull_request) Successful in 1m12s
PR Gate / SAST (bandit) (pull_request) Successful in 13s
PR Gate / Dependency audit (pip-audit) (pull_request) Successful in 21s
2026-04-12 07:53:07 +00:00
eb40be2161 chore: split dev and normal dependencies in pyproject.toml 2026-04-08 00:09:15 -04:00
0927d9e1e8 Modified: DEVELOPMENT.md 2026-04-06 12:03:36 -04:00
9c81fb4739 revert f64c251a9e
revert revert f8a9f8fc64

revert Added: modified notes. Finished CI/CD pipeline.
2026-04-06 18:02:28 +02:00
e4171789a8 Added: documentation about the deaddeck archetype and how to run it. 2026-04-06 11:51:24 -04:00
f64c251a9e revert f8a9f8fc64
revert Added: modified notes. Finished CI/CD pipeline.
2026-04-06 17:15:32 +02:00
c56c9fe667 Merge pull request 'Auto PR: dev → main' (#2) from dev into main
Some checks failed
Release / Auto-tag release (push) Successful in 14s
Release / Build, scan & push cowrie (push) Failing after 41s
Release / Build, scan & push docker_api (push) Failing after 30s
Release / Build, scan & push elasticsearch (push) Failing after 30s
Release / Build, scan & push ftp (push) Failing after 32s
Release / Build, scan & push http (push) Failing after 32s
Release / Build, scan & push imap (push) Failing after 31s
Release / Build, scan & push k8s (push) Failing after 32s
Release / Build, scan & push ldap (push) Failing after 30s
Release / Build, scan & push llmnr (push) Failing after 33s
Release / Build, scan & push mongodb (push) Failing after 32s
Release / Build, scan & push mqtt (push) Failing after 33s
Release / Build, scan & push mssql (push) Failing after 31s
Release / Build, scan & push mysql (push) Failing after 33s
Release / Build, scan & push pop3 (push) Failing after 33s
Release / Build, scan & push postgres (push) Failing after 32s
Release / Build, scan & push rdp (push) Failing after 32s
Release / Build, scan & push real_ssh (push) Failing after 33s
Release / Build, scan & push redis (push) Failing after 33s
Release / Build, scan & push sip (push) Failing after 33s
Release / Build, scan & push smb (push) Failing after 31s
Release / Build, scan & push smtp (push) Failing after 31s
Release / Build, scan & push snmp (push) Failing after 31s
Release / Build, scan & push tftp (push) Failing after 31s
Release / Build, scan & push vnc (push) Failing after 33s
Reviewed-on: #2
2026-04-06 17:11:54 +02:00
897f498bcd Merge dev into main: resolve conflicts, keep tests out of main
Some checks failed
Release / Auto-tag release (push) Successful in 14s
Release / Build, scan & push cowrie (push) Failing after 6m9s
Release / Build, scan & push docker_api (push) Failing after 31s
Release / Build, scan & push elasticsearch (push) Failing after 30s
Release / Build, scan & push ftp (push) Failing after 30s
Release / Build, scan & push http (push) Failing after 33s
Release / Build, scan & push imap (push) Failing after 30s
Release / Build, scan & push k8s (push) Failing after 30s
Release / Build, scan & push ldap (push) Failing after 33s
Release / Build, scan & push llmnr (push) Failing after 29s
Release / Build, scan & push mongodb (push) Failing after 30s
Release / Build, scan & push mqtt (push) Failing after 30s
Release / Build, scan & push mssql (push) Failing after 30s
Release / Build, scan & push mysql (push) Failing after 30s
Release / Build, scan & push pop3 (push) Failing after 32s
Release / Build, scan & push postgres (push) Failing after 29s
Release / Build, scan & push rdp (push) Failing after 29s
Release / Build, scan & push real_ssh (push) Failing after 31s
Release / Build, scan & push redis (push) Failing after 29s
Release / Build, scan & push sip (push) Failing after 30s
Release / Build, scan & push smb (push) Failing after 32s
Release / Build, scan & push smtp (push) Failing after 31s
Release / Build, scan & push snmp (push) Failing after 29s
Release / Build, scan & push tftp (push) Failing after 29s
Release / Build, scan & push vnc (push) Failing after 30s
2026-04-04 18:00:17 -03:00
92e06cb193 Add release workflow for auto-tagging and Docker image builds
Some checks failed
Release / Auto-tag release (push) Failing after 3s
Release / Build & push cowrie (push) Has been skipped
Release / Build & push docker_api (push) Has been skipped
Release / Build & push elasticsearch (push) Has been skipped
Release / Build & push ftp (push) Has been skipped
Release / Build & push http (push) Has been skipped
Release / Build & push imap (push) Has been skipped
Release / Build & push k8s (push) Has been skipped
Release / Build & push ldap (push) Has been skipped
Release / Build & push llmnr (push) Has been skipped
Release / Build & push mongodb (push) Has been skipped
Release / Build & push mqtt (push) Has been skipped
Release / Build & push mssql (push) Has been skipped
Release / Build & push mysql (push) Has been skipped
Release / Build & push pop3 (push) Has been skipped
Release / Build & push postgres (push) Has been skipped
Release / Build & push rdp (push) Has been skipped
Release / Build & push real_ssh (push) Has been skipped
Release / Build & push redis (push) Has been skipped
Release / Build & push sip (push) Has been skipped
Release / Build & push smb (push) Has been skipped
Release / Build & push smtp (push) Has been skipped
Release / Build & push snmp (push) Has been skipped
Release / Build & push tftp (push) Has been skipped
Release / Build & push vnc (push) Has been skipped
2026-04-04 17:16:53 -03:00
7ad7e1e53b main: remove tests and pytest dependency 2026-04-04 16:28:33 -03:00
178 changed files with 834 additions and 22645 deletions


@@ -33,13 +33,13 @@ jobs:
         id: version
         run: |
           # Calculate next version (v0.x)
-          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
+          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0")
           NEXT_VER=$(python3 -c "
           tag = '$LATEST_TAG'.lstrip('v')
           parts = tag.split('.')
           major = int(parts[0]) if parts[0] else 0
           minor = int(parts[1]) if len(parts) > 1 else 0
-          print(f'{major}.{minor + 1}.0')
+          print(f'{major}.{minor + 1}')
           ")
           echo "Next version: $NEXT_VER (calculated from $LATEST_TAG)"
@@ -49,11 +49,7 @@ jobs:
           git add pyproject.toml
           git commit -m "chore: auto-release v$NEXT_VER [skip ci]" || echo "No changes to commit"
-          CHANGELOG=$(git log ${LATEST_TAG}..HEAD --oneline --no-decorate --no-merges)
-          git tag -a "v$NEXT_VER" -m "Auto-release v$NEXT_VER
-          Changes since $LATEST_TAG:
-          $CHANGELOG"
+          git tag -a "v$NEXT_VER" -m "Auto-release v$NEXT_VER"
           git push origin main --follow-tags
           echo "version=$NEXT_VER" >> $GITHUB_OUTPUT
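The hunk above switches the auto-tag scheme from three-part v0.x.0 tags to two-part v0.x tags. A minimal standalone sketch of the new version calculation (the function name `next_version` is ours, not the workflow's; the workflow runs the same logic inline via `python3 -c`):

```python
def next_version(latest_tag: str) -> str:
    """Compute the next two-part v0.x version from the latest git tag."""
    tag = latest_tag.lstrip("v")           # "v0.1" -> "0.1"
    parts = tag.split(".")
    major = int(parts[0]) if parts[0] else 0
    minor = int(parts[1]) if len(parts) > 1 else 0
    return f"{major}.{minor + 1}"          # bump minor, drop the old patch digit

# e.g. next_version("v0.1") -> "0.2"; the workflow's fallback "v0.0" -> "0.1"
```

Note the fallback also changed (`v0.0.0` to `v0.0`), but either spelling yields `0.1` here since any third component is simply ignored.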
@@ -115,13 +111,13 @@ $CHANGELOG"
       cache-from: type=gha
       cache-to: type=gha,mode=max
-      - name: Install Trivy
-        run: |
-          curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
       - name: Scan with Trivy
-        run: |
-          trivy image --exit-code 1 --severity CRITICAL --ignore-unfixed decnet-${{ matrix.service }}:scan
+        uses: aquasecurity/trivy-action@master
+        with:
+          image-ref: decnet-${{ matrix.service }}:scan
+          exit-code: "1"
+          severity: CRITICAL
+          ignore-unfixed: true
       - name: Push image
         if: success()

.gitignore (vendored, 3 changes)

@@ -10,6 +10,7 @@ build/
 decnet-compose.yml
 decnet-state.json
 *.ini
+.env
 decnet.log*
 *.loggy
 *.nmap
@@ -18,7 +19,7 @@ webmail
 windows1
 *.db
 decnet.json
-.env*
+.env
 .env.local
 .coverage
 .hypothesis/


@@ -180,6 +180,7 @@ Archetypes are pre-packaged machine identities. One slug sets services, preferre
 | Slug | Services | OS Fingerprint | Description |
 |---|---|---|---|
+| `deaddeck` | ssh | linux | Initial machine to be exploited. Real SSH container. |
 | `windows-workstation` | smb, rdp | windows | Corporate Windows desktop |
 | `windows-server` | smb, rdp, ldap | windows | Windows domain member |
 | `domain-controller` | ldap, smb, rdp, llmnr | windows | Active Directory DC |
@@ -270,6 +271,11 @@ List live at any time with `decnet services`.
 Most services accept persona configuration to make honeypot responses more convincing. Config is passed via INI subsections (`[decky-name.service]`) or the `service_config` field in code.
 ```ini
+[deaddeck-1]
+amount=1
+archetype=deaddeck
+ssh.password=admin
+
 [decky-webmail.http]
 server_header = Apache/2.4.54 (Debian)
 fake_app = wordpress

decnet.collector.log (new file, 1 addition)

@@ -0,0 +1 @@
+Collector starting → /home/anti/Tools/DECNET/decnet.log


@@ -15,7 +15,6 @@ import typer
 from rich.console import Console
 from rich.table import Table
-from decnet.logging import get_logger
 from decnet.env import (
     DECNET_API_HOST,
     DECNET_API_PORT,
@@ -33,24 +32,6 @@ from decnet.ini_loader import load_ini
 from decnet.network import detect_interface, detect_subnet, allocate_ips, get_host_ip
 from decnet.services.registry import all_services
-log = get_logger("cli")
-def _daemonize() -> None:
-    """Fork the current process into a background daemon (Unix double-fork)."""
-    import os
-    import sys
-    if os.fork() > 0:
-        raise SystemExit(0)
-    os.setsid()
-    if os.fork() > 0:
-        raise SystemExit(0)
-    sys.stdout = open(os.devnull, "w")  # noqa: SIM115
-    sys.stderr = open(os.devnull, "w")  # noqa: SIM115
-    sys.stdin = open(os.devnull, "r")  # noqa: SIM115
 app = typer.Typer(
     name="decnet",
     help="Deploy a deception network of honeypot deckies on your LAN.",
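The `_daemonize` helper removed in the hunk above is the classic Unix double-fork. As a reading aid, here is a self-contained sketch of the same pattern; this mirrors the removed code and is not the repository's current implementation:

```python
import os
import sys

def daemonize() -> None:
    """Detach from the controlling terminal via the Unix double-fork."""
    if os.fork() > 0:           # first fork: original parent exits immediately
        raise SystemExit(0)
    os.setsid()                 # child becomes session leader, drops the tty
    if os.fork() > 0:           # second fork: session leader exits, so the
        raise SystemExit(0)     # grandchild can never reacquire a terminal
    sys.stdout.flush()
    sys.stderr.flush()
    sys.stdout = open(os.devnull, "w")   # silence all further console output
    sys.stderr = open(os.devnull, "w")
    sys.stdin = open(os.devnull, "r")
```

The second fork is what distinguishes this from a plain background process: a non-session-leader on Unix cannot open a controlling terminal, so the daemon stays detached for good.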
@@ -59,23 +40,30 @@ app = typer.Typer(
 console = Console()
-def _kill_all_services() -> None:
-    """Find and kill all running DECNET microservice processes."""
-    import os
-    registry = _service_registry(str(DECNET_INGEST_LOG_FILE))
-    killed = 0
-    for name, match_fn, _launch_args in registry:
-        pid = _is_running(match_fn)
-        if pid is not None:
-            console.print(f"[yellow]Stopping {name} (PID {pid})...[/]")
-            os.kill(pid, signal.SIGTERM)
-            killed += 1
-    if killed:
-        console.print(f"[green]{killed} background process(es) stopped.[/]")
+def _kill_api() -> None:
+    """Find and kill any running DECNET API (uvicorn) or mutator processes."""
+    import psutil
+    import os
+    _killed: bool = False
+    for _proc in psutil.process_iter(['pid', 'name', 'cmdline']):
+        try:
+            _cmd = _proc.info['cmdline']
+            if not _cmd:
+                continue
+            if "uvicorn" in _cmd and "decnet.web.api:app" in _cmd:
+                console.print(f"[yellow]Stopping DECNET API (PID {_proc.info['pid']})...[/]")
+                os.kill(_proc.info['pid'], signal.SIGTERM)
+                _killed = True
+            elif "decnet.cli" in _cmd and "mutate" in _cmd and "--watch" in _cmd:
+                console.print(f"[yellow]Stopping DECNET Mutator Watcher (PID {_proc.info['pid']})...[/]")
+                os.kill(_proc.info['pid'], signal.SIGTERM)
+                _killed = True
+        except (psutil.NoSuchProcess, psutil.AccessDenied):
+            continue
+    if _killed:
+        console.print("[green]Background processes stopped.[/]")
+    else:
+        console.print("[dim]No DECNET services were running.[/]")
 @app.command()
@@ -83,18 +71,12 @@ def api(
     port: int = typer.Option(DECNET_API_PORT, "--port", help="Port for the backend API"),
     host: str = typer.Option(DECNET_API_HOST, "--host", help="Host IP for the backend API"),
     log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", help="Path to the DECNET log file to monitor"),
-    daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
 ) -> None:
     """Run the DECNET API and Web Dashboard in standalone mode."""
     import subprocess  # nosec B404
     import sys
     import os
-    if daemon:
-        log.info("API daemonizing host=%s port=%d", host, port)
-        _daemonize()
-    log.info("API command invoked host=%s port=%d", host, port)
     console.print(f"[green]Starting DECNET API on {host}:{port}...[/]")
     _env: dict[str, str] = os.environ.copy()
     _env["DECNET_INGEST_LOG_FILE"] = str(log_file)
@@ -130,16 +112,9 @@ def deploy(
     config_file: Optional[str] = typer.Option(None, "--config", "-c", help="Path to INI config file"),
     api: bool = typer.Option(False, "--api", help="Start the FastAPI backend to ingest and serve logs"),
     api_port: int = typer.Option(8000, "--api-port", help="Port for the backend API"),
-    daemon: bool = typer.Option(False, "--daemon", help="Detach to background as a daemon process"),
 ) -> None:
     """Deploy deckies to the LAN."""
     import os
-    if daemon:
-        log.info("deploy daemonizing mode=%s deckies=%s", mode, deckies)
-        _daemonize()
-    log.info("deploy command invoked mode=%s deckies=%s dry_run=%s", mode, deckies, dry_run)
     if mode not in ("unihost", "swarm"):
         console.print("[red]--mode must be 'unihost' or 'swarm'[/]")
         raise typer.Exit(1)
@@ -259,13 +234,8 @@ def deploy(
         mutate_interval=mutate_interval,
     )
-    log.debug("deploy: config built deckies=%d interface=%s subnet=%s", len(config.deckies), config.interface, config.subnet)
     from decnet.engine import deploy as _deploy
     _deploy(config, dry_run=dry_run, no_cache=no_cache, parallel=parallel)
-    if dry_run:
-        log.info("deploy: dry-run complete, no containers started")
-    else:
-        log.info("deploy: deployment complete deckies=%d", len(config.deckies))
     if mutate_interval is not None and not dry_run:
         import subprocess  # nosec B404
@@ -290,7 +260,7 @@ def deploy(
             subprocess.Popen(  # nosec B603
                 [sys.executable, "-m", "decnet.cli", "collect", "--log-file", str(effective_log_file)],
                 stdin=subprocess.DEVNULL,
-                stdout=open(_collector_err, "a"),
+                stdout=open(_collector_err, "a"),  # nosec B603
                 stderr=subprocess.STDOUT,
                 start_new_session=True,
             )
@@ -312,198 +282,14 @@ def deploy(
         except (FileNotFoundError, subprocess.SubprocessError):
             console.print("[red]Failed to start API. Ensure 'uvicorn' is installed in the current environment.[/]")
-    if effective_log_file and not dry_run:
-        import subprocess  # nosec B404
-        import sys
-        console.print("[bold cyan]Starting DECNET-PROBER[/] (auto-discovers attackers from log stream)")
-        try:
-            _prober_args = [
-                sys.executable, "-m", "decnet.cli", "probe",
-                "--daemon",
-                "--log-file", str(effective_log_file),
-            ]
-            subprocess.Popen(  # nosec B603
-                _prober_args,
-                stdin=subprocess.DEVNULL,
-                stdout=subprocess.DEVNULL,
-                stderr=subprocess.STDOUT,
-                start_new_session=True,
-            )
-        except (FileNotFoundError, subprocess.SubprocessError):
-            console.print("[red]Failed to start DECNET-PROBER.[/]")
-    if effective_log_file and not dry_run:
-        import subprocess  # nosec B404
-        import sys
-        console.print("[bold cyan]Starting DECNET-PROFILER[/] (builds attacker profiles from log stream)")
-        try:
-            subprocess.Popen(  # nosec B603
-                [sys.executable, "-m", "decnet.cli", "profiler", "--daemon"],
-                stdin=subprocess.DEVNULL,
-                stdout=subprocess.DEVNULL,
-                stderr=subprocess.STDOUT,
-                start_new_session=True,
-            )
-        except (FileNotFoundError, subprocess.SubprocessError):
-            console.print("[red]Failed to start DECNET-PROFILER.[/]")
-    if effective_log_file and not dry_run:
-        import subprocess  # nosec B404
-        import sys
-        console.print("[bold cyan]Starting DECNET-SNIFFER[/] (passive network capture)")
-        try:
-            subprocess.Popen(  # nosec B603
-                [sys.executable, "-m", "decnet.cli", "sniffer",
-                 "--daemon",
-                 "--log-file", str(effective_log_file)],
-                stdin=subprocess.DEVNULL,
-                stdout=subprocess.DEVNULL,
-                stderr=subprocess.STDOUT,
-                start_new_session=True,
-            )
-        except (FileNotFoundError, subprocess.SubprocessError):
-            console.print("[red]Failed to start DECNET-SNIFFER.[/]")
-def _is_running(match_fn) -> int | None:
-    """Return PID of a running DECNET process matching ``match_fn(cmdline)``, or None."""
-    import psutil
-    for proc in psutil.process_iter(["pid", "cmdline"]):
-        try:
-            cmd = proc.info["cmdline"]
-            if cmd and match_fn(cmd):
-                return proc.info["pid"]
-        except (psutil.NoSuchProcess, psutil.AccessDenied):
-            continue
-    return None
-# Each entry: (display_name, detection_fn, launch_args_fn)
-# launch_args_fn receives log_file and returns the Popen argv list.
-def _service_registry(log_file: str) -> list[tuple[str, callable, list[str]]]:
-    """Return the microservice registry for health-check and relaunch."""
-    import sys
-    _py = sys.executable
-    return [
-        (
-            "Collector",
-            lambda cmd: "decnet.cli" in cmd and "collect" in cmd,
-            [_py, "-m", "decnet.cli", "collect", "--daemon", "--log-file", log_file],
-        ),
-        (
-            "Mutator",
-            lambda cmd: "decnet.cli" in cmd and "mutate" in cmd and "--watch" in cmd,
-            [_py, "-m", "decnet.cli", "mutate", "--daemon", "--watch"],
-        ),
-        (
-            "Prober",
-            lambda cmd: "decnet.cli" in cmd and "probe" in cmd,
-            [_py, "-m", "decnet.cli", "probe", "--daemon", "--log-file", log_file],
-        ),
-        (
-            "Profiler",
-            lambda cmd: "decnet.cli" in cmd and "profiler" in cmd,
-            [_py, "-m", "decnet.cli", "profiler", "--daemon"],
-        ),
-        (
-            "Sniffer",
-            lambda cmd: "decnet.cli" in cmd and "sniffer" in cmd,
-            [_py, "-m", "decnet.cli", "sniffer", "--daemon", "--log-file", log_file],
-        ),
-        (
-            "API",
-            lambda cmd: "uvicorn" in cmd and "decnet.web.api:app" in cmd,
-            [_py, "-m", "uvicorn", "decnet.web.api:app",
-             "--host", DECNET_API_HOST, "--port", str(DECNET_API_PORT)],
-        ),
-    ]
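The registry removed above pairs each service with a cmdline predicate for `_is_running`. A detail worth noting: `psutil` reports `cmdline` as an argv list, so `"decnet.cli" in cmd` is an exact-element membership test, not a substring match. A small sketch of that matching logic (`MATCHERS` and `find_pid` are our illustrative names, not identifiers from the repository):

```python
# Predicates mirror the removed registry entries. argv is a list of exact
# strings, so `in` tests list membership, not substring containment.
MATCHERS = {
    "Collector": lambda cmd: "decnet.cli" in cmd and "collect" in cmd,
    "Mutator": lambda cmd: "decnet.cli" in cmd and "mutate" in cmd and "--watch" in cmd,
    "API": lambda cmd: "uvicorn" in cmd and "decnet.web.api:app" in cmd,
}

def find_pid(match_fn):
    """Return the PID of the first process whose argv satisfies match_fn, else None."""
    import psutil  # lazy import, as in the removed helper
    for proc in psutil.process_iter(["pid", "cmdline"]):
        try:
            cmd = proc.info["cmdline"]
            if cmd and match_fn(cmd):
                return proc.info["pid"]
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return None
```

Because the match is against list elements, a process whose argv merely contains a path like `/opt/decnet.cli.bak` would not match; only an argument that is exactly `decnet.cli` does.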
-@app.command()
-def redeploy(
-    log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", "-f", help="Path to the DECNET log file"),
-) -> None:
-    """Check running DECNET services and relaunch any that are down."""
-    import subprocess  # nosec B404
-    log.info("redeploy: checking services")
-    registry = _service_registry(str(log_file))
-    table = Table(title="DECNET Services", show_lines=True)
-    table.add_column("Service", style="bold cyan")
-    table.add_column("Status")
-    table.add_column("PID", style="dim")
-    table.add_column("Action")
-    relaunched = 0
-    for name, match_fn, launch_args in registry:
-        pid = _is_running(match_fn)
-        if pid is not None:
-            table.add_row(name, "[green]UP[/]", str(pid), "")
-        else:
-            try:
-                subprocess.Popen(  # nosec B603
-                    launch_args,
-                    stdin=subprocess.DEVNULL,
-                    stdout=subprocess.DEVNULL,
-                    stderr=subprocess.STDOUT,
-                    start_new_session=True,
-                )
-                table.add_row(name, "[red]DOWN[/]", "", "[green]relaunched[/]")
-                relaunched += 1
-            except (FileNotFoundError, subprocess.SubprocessError) as exc:
-                table.add_row(name, "[red]DOWN[/]", "", f"[red]failed: {exc}[/]")
-    console.print(table)
-    if relaunched:
-        console.print(f"[green]{relaunched} service(s) relaunched.[/]")
-    else:
-        console.print("[green]All services running.[/]")
-@app.command()
-def probe(
-    log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", "-f", help="Path for RFC 5424 syslog + .json output (reads attackers from .json, writes results to both)"),
-    interval: int = typer.Option(300, "--interval", "-i", help="Seconds between probe cycles (default: 300)"),
-    timeout: float = typer.Option(5.0, "--timeout", help="Per-probe TCP timeout in seconds"),
-    daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background (used by deploy, no console output)"),
-) -> None:
-    """Fingerprint attackers (JARM + HASSH + TCP/IP stack) discovered in the log stream."""
-    import asyncio
-    from decnet.prober import prober_worker
-    if daemon:
-        log.info("probe daemonizing log_file=%s interval=%d", log_file, interval)
-        _daemonize()
-        asyncio.run(prober_worker(log_file, interval=interval, timeout=timeout))
-        return
-    else:
-        log.info("probe command invoked log_file=%s interval=%d", log_file, interval)
-    console.print(f"[bold cyan]DECNET-PROBER[/] watching {log_file} for attackers (interval: {interval}s)")
-    console.print("[dim]Press Ctrl+C to stop[/]")
-    try:
-        asyncio.run(prober_worker(log_file, interval=interval, timeout=timeout))
-    except KeyboardInterrupt:
-        console.print("\n[yellow]DECNET-PROBER stopped.[/]")
 @app.command()
 def collect(
     log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", "-f", help="Path to write RFC 5424 syslog lines and .json records"),
-    daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
 ) -> None:
     """Stream Docker logs from all running decky service containers to a log file."""
     import asyncio
     from decnet.collector import log_collector_worker
-    if daemon:
-        log.info("collect daemonizing log_file=%s", log_file)
-        _daemonize()
-    log.info("collect command invoked log_file=%s", log_file)
     console.print(f"[bold cyan]Collector starting[/] → {log_file}")
     asyncio.run(log_collector_worker(log_file))
@@ -511,19 +297,14 @@ def collect(
 @app.command()
 def mutate(
     watch: bool = typer.Option(False, "--watch", "-w", help="Run continuously and mutate deckies according to their interval"),
-    decky_name: Optional[str] = typer.Option(None, "--decky", help="Force mutate a specific decky immediately"),
+    decky_name: Optional[str] = typer.Option(None, "--decky", "-d", help="Force mutate a specific decky immediately"),
     force_all: bool = typer.Option(False, "--all", help="Force mutate all deckies immediately"),
-    daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
 ) -> None:
     """Manually trigger or continuously watch for decky mutation."""
     import asyncio
     from decnet.mutator import mutate_decky, mutate_all, run_watch_loop
     from decnet.web.dependencies import repo
-    if daemon:
-        log.info("mutate daemonizing watch=%s", watch)
-        _daemonize()
     async def _run() -> None:
         await repo.initialize()
         if watch:
@@ -541,25 +322,9 @@ def mutate(
 @app.command()
 def status() -> None:
     """Show running deckies and their status."""
-    log.info("status command invoked")
     from decnet.engine import status as _status
     _status()
-    registry = _service_registry(str(DECNET_INGEST_LOG_FILE))
-    svc_table = Table(title="DECNET Services", show_lines=True)
-    svc_table.add_column("Service", style="bold cyan")
-    svc_table.add_column("Status")
-    svc_table.add_column("PID", style="dim")
-    for name, match_fn, _launch_args in registry:
-        pid = _is_running(match_fn)
-        if pid is not None:
-            svc_table.add_row(name, "[green]UP[/]", str(pid))
-        else:
-            svc_table.add_row(name, "[red]DOWN[/]", "")
-    console.print(svc_table)
 @app.command()
 def teardown(
@@ -571,13 +336,11 @@ def teardown(
         console.print("[red]Specify --all or --id <name>.[/]")
         raise typer.Exit(1)
-    log.info("teardown command invoked all=%s id=%s", all_, id_)
     from decnet.engine import teardown as _teardown
     _teardown(decky_id=id_)
-    log.info("teardown complete all=%s id=%s", all_, id_)
     if all_:
-        _kill_all_services()
+        _kill_api()
 @app.command(name="services")
@@ -611,7 +374,6 @@ def correlate(
     min_deckies: int = typer.Option(2, "--min-deckies", "-m", help="Minimum number of distinct deckies an IP must touch to be reported"),
     output: str = typer.Option("table", "--output", "-o", help="Output format: table | json | syslog"),
     emit_syslog: bool = typer.Option(False, "--emit-syslog", help="Also print traversal events as RFC 5424 lines (for SIEM piping)"),
-    daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
 ) -> None:
     """Analyse logs for cross-decky traversals and print the attacker movement graph."""
     import sys
@@ -619,10 +381,6 @@ def correlate(
from pathlib import Path from pathlib import Path
from decnet.correlation.engine import CorrelationEngine from decnet.correlation.engine import CorrelationEngine
if daemon:
log.info("correlate daemonizing log_file=%s", log_file)
_daemonize()
engine = CorrelationEngine()
if log_file:
@@ -687,15 +445,8 @@ def list_archetypes() -> None:
def serve_web(
web_port: int = typer.Option(DECNET_WEB_PORT, "--web-port", help="Port to serve the DECNET Web Dashboard"),
host: str = typer.Option(DECNET_WEB_HOST, "--host", help="Host IP to serve the Web Dashboard"),
api_port: int = typer.Option(DECNET_API_PORT, "--api-port", help="Port the DECNET API is listening on"),
daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
) -> None:
-"""Serve the DECNET Web Dashboard frontend.
+"""Serve the DECNET Web Dashboard frontend."""
Proxies /api/* requests to the API server so the frontend can use
relative URLs (/api/v1/...) with no CORS configuration required.
"""
import http.client
import http.server
import socketserver
from pathlib import Path
@@ -706,267 +457,22 @@ def serve_web(
console.print(f"[red]Frontend build not found at {dist_dir}. Make sure you run 'npm run build' inside 'decnet_web'.[/]")
raise typer.Exit(1)
if daemon:
log.info("web daemonizing host=%s port=%d api_port=%d", host, web_port, api_port)
_daemonize()
_api_port = api_port
class SPAHTTPRequestHandler(http.server.SimpleHTTPRequestHandler):
def do_GET(self):
if self.path.startswith("/api/"):
self._proxy("GET")
return
path = self.translate_path(self.path)
if not Path(path).exists() or Path(path).is_dir():
self.path = "/index.html"
return super().do_GET()
def do_POST(self):
if self.path.startswith("/api/"):
self._proxy("POST")
return
self.send_error(405)
def do_PUT(self):
if self.path.startswith("/api/"):
self._proxy("PUT")
return
self.send_error(405)
def do_DELETE(self):
if self.path.startswith("/api/"):
self._proxy("DELETE")
return
self.send_error(405)
def _proxy(self, method: str) -> None:
content_length = int(self.headers.get("Content-Length", 0))
body = self.rfile.read(content_length) if content_length else None
forward = {k: v for k, v in self.headers.items()
if k.lower() not in ("host", "connection")}
try:
conn = http.client.HTTPConnection("127.0.0.1", _api_port, timeout=120)
conn.request(method, self.path, body=body, headers=forward)
resp = conn.getresponse()
self.send_response(resp.status)
for key, val in resp.getheaders():
if key.lower() not in ("connection", "transfer-encoding"):
self.send_header(key, val)
self.end_headers()
# Disable socket timeout for SSE streams — they are
# long-lived by design and the 120s timeout would kill them.
content_type = resp.getheader("Content-Type", "")
if "text/event-stream" in content_type:
conn.sock.settimeout(None)
# read1() returns bytes immediately available in the buffer
# without blocking for more. Plain read(4096) waits until
# 4096 bytes accumulate — fatal for SSE where each event
# is only ~100-500 bytes.
_read = getattr(resp, "read1", resp.read)
while True:
chunk = _read(4096)
if not chunk:
break
self.wfile.write(chunk)
self.wfile.flush()
except Exception as exc:
log.warning("web proxy error %s %s: %s", method, self.path, exc)
self.send_error(502, f"API proxy error: {exc}")
finally:
try:
conn.close()
except Exception: # nosec B110 — best-effort conn cleanup
pass
def log_message(self, fmt: str, *args: object) -> None:
log.debug("web %s", fmt % args)
import os
os.chdir(dist_dir)
-socketserver.TCPServer.allow_reuse_address = True
-with socketserver.ThreadingTCPServer((host, web_port), SPAHTTPRequestHandler) as httpd:
+with socketserver.TCPServer((host, web_port), SPAHTTPRequestHandler) as httpd:
console.print(f"[green]Serving DECNET Web Dashboard on http://{host}:{web_port}[/]")
console.print(f"[dim]Proxying /api/* → http://127.0.0.1:{_api_port}[/]")
try:
httpd.serve_forever()
except KeyboardInterrupt:
console.print("\n[dim]Shutting down dashboard server.[/]")
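The removed proxy code relies on `read1()` to forward SSE events promptly, as its inline comments explain. That buffering difference from plain `read()` can be demonstrated without a live server; `SlowRaw` below is a hypothetical stand-in for a socket that delivers one small SSE event per read:

```python
import io

class SlowRaw(io.RawIOBase):
    """Stand-in for a socket that yields one small SSE event per raw read."""
    def __init__(self, chunks: list[bytes]) -> None:
        self._chunks = list(chunks)

    def readable(self) -> bool:
        return True

    def readinto(self, b) -> int:
        if not self._chunks:
            return 0  # EOF
        chunk = self._chunks.pop(0)
        b[: len(chunk)] = chunk
        return len(chunk)

events = [b"event: ping\n\n", b"event: pong\n\n"]

# read1(): at most one underlying read — the first event comes back immediately.
first = io.BufferedReader(SlowRaw(events)).read1(4096)

# read(4096): keeps reading until 4096 bytes or EOF. Here it returns only
# because SlowRaw hits EOF; a long-lived SSE socket would block indefinitely.
full = io.BufferedReader(SlowRaw(events)).read(4096)
```

`first` holds just the first ~13-byte event, while `full` concatenates both — which is exactly why the proxy prefers `read1` for `text/event-stream` responses.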
@app.command(name="profiler")
def profiler_cmd(
interval: int = typer.Option(30, "--interval", "-i", help="Seconds between profile rebuild cycles"),
daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
) -> None:
"""Run the attacker profiler as a standalone microservice."""
import asyncio
from decnet.profiler import attacker_profile_worker
from decnet.web.dependencies import repo
if daemon:
log.info("profiler daemonizing interval=%d", interval)
_daemonize()
log.info("profiler starting interval=%d", interval)
console.print(f"[bold cyan]Profiler starting[/] (interval: {interval}s)")
async def _run() -> None:
await repo.initialize()
await attacker_profile_worker(repo, interval=interval)
try:
asyncio.run(_run())
except KeyboardInterrupt:
console.print("\n[yellow]Profiler stopped.[/]")
@app.command(name="sniffer")
def sniffer_cmd(
log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", "-f", help="Path to write captured syslog + JSON records"),
daemon: bool = typer.Option(False, "--daemon", "-d", help="Detach to background as a daemon process"),
) -> None:
"""Run the network sniffer as a standalone microservice."""
import asyncio
from decnet.sniffer import sniffer_worker
if daemon:
log.info("sniffer daemonizing log_file=%s", log_file)
_daemonize()
log.info("sniffer starting log_file=%s", log_file)
console.print(f"[bold cyan]Sniffer starting[/] → {log_file}")
try:
asyncio.run(sniffer_worker(log_file))
except KeyboardInterrupt:
console.print("\n[yellow]Sniffer stopped.[/]")
_DB_RESET_TABLES: tuple[str, ...] = (
# Order matters for DROP TABLE: attacker_behavior FK-references attackers.
"attacker_behavior",
"attackers",
"logs",
"bounty",
"state",
"users",
)
async def _db_reset_mysql_async(dsn: str, mode: str, confirm: bool) -> None:
"""Inspect + (optionally) wipe a MySQL database. Pulled out of the CLI
wrapper so tests can drive it without spawning a Typer runner."""
from urllib.parse import urlparse
from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine
db_name = urlparse(dsn).path.lstrip("/") or "(default)"
engine = create_async_engine(dsn)
try:
# Collect current row counts per table. Missing tables yield -1.
rows: dict[str, int] = {}
async with engine.connect() as conn:
for tbl in _DB_RESET_TABLES:
try:
result = await conn.execute(text(f"SELECT COUNT(*) FROM `{tbl}`")) # nosec B608
rows[tbl] = result.scalar() or 0
except Exception: # noqa: BLE001 — ProgrammingError for missing table varies by driver
rows[tbl] = -1
summary = Table(title=f"DECNET MySQL reset — database `{db_name}` (mode={mode})")
summary.add_column("Table", style="cyan")
summary.add_column("Rows", justify="right")
for tbl, count in rows.items():
summary.add_row(tbl, "[dim]missing[/]" if count < 0 else f"{count:,}")
console.print(summary)
if not confirm:
console.print(
"[yellow]Dry-run only. Re-run with [bold]--i-know-what-im-doing[/] "
"to actually execute.[/]"
)
return
# Destructive phase. FK checks off so TRUNCATE/DROP works in any order.
async with engine.begin() as conn:
await conn.execute(text("SET FOREIGN_KEY_CHECKS = 0"))
for tbl in _DB_RESET_TABLES:
if rows.get(tbl, -1) < 0:
continue # skip absent tables silently
if mode == "truncate":
await conn.execute(text(f"TRUNCATE TABLE `{tbl}`"))
console.print(f"[green]✓ TRUNCATE {tbl}[/]")
else: # drop-tables
await conn.execute(text(f"DROP TABLE `{tbl}`"))
console.print(f"[green]✓ DROP TABLE {tbl}[/]")
await conn.execute(text("SET FOREIGN_KEY_CHECKS = 1"))
console.print(f"[bold green]Done. Database `{db_name}` reset ({mode}).[/]")
finally:
await engine.dispose()
@app.command(name="db-reset")
def db_reset(
i_know: bool = typer.Option(
False,
"--i-know-what-im-doing",
help="Required to actually execute. Without it, the command runs in dry-run mode.",
),
mode: str = typer.Option(
"truncate",
"--mode",
help="truncate (wipe rows, keep schema) | drop-tables (DROP TABLE for each DECNET table)",
),
url: Optional[str] = typer.Option(
None,
"--url",
help="Override DECNET_DB_URL for this invocation (e.g. when cleanup needs admin creds).",
),
) -> None:
"""Wipe the MySQL database used by the DECNET dashboard.
Destructive. Runs dry by default — pass --i-know-what-im-doing to commit.
Only supported against MySQL; refuses to operate on SQLite.
"""
import asyncio
import os
if mode not in ("truncate", "drop-tables"):
console.print(f"[red]Invalid --mode '{mode}'. Expected: truncate | drop-tables.[/]")
raise typer.Exit(2)
db_type = os.environ.get("DECNET_DB_TYPE", "sqlite").lower()
if db_type != "mysql":
console.print(
f"[red]db-reset is MySQL-only (DECNET_DB_TYPE='{db_type}'). "
f"For SQLite, just delete the decnet.db file.[/]"
)
raise typer.Exit(2)
dsn = url or os.environ.get("DECNET_DB_URL")
if not dsn:
# Fall back to component env vars (DECNET_DB_HOST/PORT/NAME/USER/PASSWORD).
from decnet.web.db.mysql.database import build_mysql_url
try:
dsn = build_mysql_url()
except ValueError as e:
console.print(f"[red]{e}[/]")
raise typer.Exit(2) from e
log.info("db-reset invoked mode=%s confirm=%s", mode, i_know)
try:
asyncio.run(_db_reset_mysql_async(dsn, mode=mode, confirm=i_know))
except Exception as e: # noqa: BLE001
console.print(f"[red]db-reset failed: {e}[/]")
raise typer.Exit(1) from e
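The inspect-then-wipe flow of `_db_reset_mysql_async` can be sketched synchronously against an in-memory SQLite database. This is an illustration only: table names are borrowed from `_DB_RESET_TABLES`, `PRAGMA foreign_keys` stands in for MySQL's `SET FOREIGN_KEY_CHECKS`, and `DELETE` stands in for `TRUNCATE`, since SQLite has neither.

```python
import sqlite3

TABLES = ("attacker_behavior", "attackers", "logs")  # child table first, as above

def reset(conn: sqlite3.Connection, confirm: bool = False) -> dict[str, int]:
    """Report per-table row counts; wipe only when confirm=True (dry-run default)."""
    counts: dict[str, int] = {}
    for tbl in TABLES:
        try:
            counts[tbl] = conn.execute(f"SELECT COUNT(*) FROM {tbl}").fetchone()[0]
        except sqlite3.OperationalError:
            counts[tbl] = -1  # table missing — skipped in the destructive phase
    if confirm:
        conn.execute("PRAGMA foreign_keys = OFF")  # analogue of FOREIGN_KEY_CHECKS=0
        for tbl in TABLES:
            if counts[tbl] >= 0:
                conn.execute(f"DELETE FROM {tbl}")  # SQLite has no TRUNCATE
        conn.execute("PRAGMA foreign_keys = ON")
    return counts

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attackers (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE attacker_behavior (aid INTEGER REFERENCES attackers(id))")
conn.execute("INSERT INTO attackers VALUES (1)")
dry = reset(conn)           # dry run: counts only, nothing deleted
reset(conn, confirm=True)   # destructive pass
left = conn.execute("SELECT COUNT(*) FROM attackers").fetchone()[0]
```

The `-1` sentinel for missing tables and the confirm-gated destructive phase mirror the real command's dry-run-by-default behaviour.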
if __name__ == '__main__':  # pragma: no cover
app()

View File

@@ -8,100 +8,13 @@ The ingester tails the .json file; rsyslog can consume the .log file independent
import asyncio
import json
-import os
+import logging
import re
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from pathlib import Path
from typing import Any, Optional
-from decnet.logging import get_logger
-from decnet.telemetry import traced as _traced, get_tracer as _get_tracer, inject_context as _inject_ctx
-logger = get_logger("collector")
+logger = logging.getLogger("decnet.collector")
# ─── Ingestion rate limiter ───────────────────────────────────────────────────
#
# Rationale: connection-lifecycle events (connect/disconnect/accept/close) are
# emitted once per TCP connection. During a portscan or credential-stuffing
# run, a single attacker can generate hundreds of these per second from the
# honeypot services themselves — each becoming a tiny WAL-write transaction
# through the ingester, starving reads until the queue drains.
#
# The collector still writes every line to the raw .log file (forensic record
# for rsyslog/SIEM). Only the .json path — which feeds SQLite — is deduped.
#
# Dedup key: (attacker_ip, decky, service, event_type)
# Window: DECNET_COLLECTOR_RL_WINDOW_SEC seconds (default 1.0)
# Scope: DECNET_COLLECTOR_RL_EVENT_TYPES comma list
# (default: connect,disconnect,connection,accept,close)
# Events outside that set bypass the limiter untouched.
def _parse_float_env(name: str, default: float) -> float:
raw = os.environ.get(name)
if raw is None:
return default
try:
value = float(raw)
except ValueError:
logger.warning("collector: invalid %s=%r, using default %s", name, raw, default)
return default
return max(0.0, value)
_RL_WINDOW_SEC: float = _parse_float_env("DECNET_COLLECTOR_RL_WINDOW_SEC", 1.0)
_RL_EVENT_TYPES: frozenset[str] = frozenset(
t.strip()
for t in os.environ.get(
"DECNET_COLLECTOR_RL_EVENT_TYPES",
"connect,disconnect,connection,accept,close",
).split(",")
if t.strip()
)
_RL_MAX_ENTRIES: int = 10_000
_rl_lock: threading.Lock = threading.Lock()
_rl_last: dict[tuple[str, str, str, str], float] = {}
def _should_ingest(parsed: dict[str, Any]) -> bool:
"""
Return True if this parsed event should be written to the JSON ingestion
stream. Rate-limited connection-lifecycle events return False when another
event with the same (attacker_ip, decky, service, event_type) was emitted
inside the dedup window.
"""
event_type = parsed.get("event_type", "")
if _RL_WINDOW_SEC <= 0.0 or event_type not in _RL_EVENT_TYPES:
return True
key = (
parsed.get("attacker_ip", "Unknown"),
parsed.get("decky", ""),
parsed.get("service", ""),
event_type,
)
now = time.monotonic()
with _rl_lock:
last = _rl_last.get(key, 0.0)
if now - last < _RL_WINDOW_SEC:
return False
_rl_last[key] = now
# Opportunistic GC: when the map grows past the cap, drop entries older
# than 60 windows (well outside any realistic in-flight dedup range).
if len(_rl_last) > _RL_MAX_ENTRIES:
cutoff = now - (_RL_WINDOW_SEC * 60.0)
stale = [k for k, t in _rl_last.items() if t < cutoff]
for k in stale:
del _rl_last[k]
return True
def _reset_rate_limiter() -> None:
"""Test-only helper — clear dedup state between test cases."""
with _rl_lock:
_rl_last.clear()
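The window logic above can be condensed into a testable sketch. Passing `now` explicitly is a deviation from the real code (which uses `time.monotonic()` under a lock), but it makes the dedup behaviour easy to verify:

```python
WINDOW = 1.0  # seconds, mirroring DECNET_COLLECTOR_RL_WINDOW_SEC's default
_last: dict[tuple[str, str, str, str], float] = {}

def should_ingest(ip: str, decky: str, service: str, event_type: str, now: float) -> bool:
    """Drop events repeating the same dedup key inside the window."""
    key = (ip, decky, service, event_type)
    if now - _last.get(key, float("-inf")) < WINDOW:
        return False  # duplicate inside the window: keep it out of the JSON stream
    _last[key] = now
    return True

r1 = should_ingest("1.2.3.4", "decky-01", "ssh", "connect", 0.0)  # first event
r2 = should_ingest("1.2.3.4", "decky-01", "ssh", "connect", 0.5)  # inside window
r3 = should_ingest("1.2.3.4", "decky-01", "ssh", "connect", 1.5)  # window elapsed
```

Only the second call is suppressed; the raw `.log` path in the real collector bypasses this check entirely.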
# ─── RFC 5424 parser ──────────────────────────────────────────────────────────
@@ -116,7 +29,7 @@ _RFC5424_RE = re.compile(
)
_SD_BLOCK_RE = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
-_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "remote_addr", "target_ip", "ip")
+_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "ip")
def parse_rfc5424(line: str) -> Optional[dict[str, Any]]:
@@ -202,78 +115,34 @@ def is_service_event(attrs: dict) -> bool:
# ─── Blocking stream worker (runs in a thread) ────────────────────────────────
def _reopen_if_needed(path: Path, fh: Optional[Any]) -> Any:
"""Return fh if it still points to the same inode as path; otherwise close
fh and open a fresh handle. Handles the file being deleted (manual rm) or
rotated (logrotate rename + create)."""
try:
if fh is not None and os.fstat(fh.fileno()).st_ino == os.stat(path).st_ino:
return fh
except OSError:
pass
# File gone or inode changed — close stale handle and open a new one.
if fh is not None:
try:
fh.close()
except Exception: # nosec B110 — best-effort file handle cleanup
pass
path.parent.mkdir(parents=True, exist_ok=True)
return open(path, "a", encoding="utf-8")
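The inode comparison can be exercised against a simulated logrotate. This sketch restates the helper standalone so the rotation is observable end to end:

```python
import os
import tempfile
from pathlib import Path

def reopen_if_needed(path: Path, fh):
    """Return fh while path still has the same inode; otherwise reopen."""
    try:
        if fh is not None and os.fstat(fh.fileno()).st_ino == os.stat(path).st_ino:
            return fh
    except OSError:
        pass  # file deleted or rotated away
    if fh is not None:
        fh.close()
    path.parent.mkdir(parents=True, exist_ok=True)
    return open(path, "a", encoding="utf-8")

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "app.log"
    fh = reopen_if_needed(p, None)
    fh.write("one\n"); fh.flush()
    os.rename(p, Path(d) / "app.log.1")   # simulate logrotate's rename step
    fh = reopen_if_needed(p, fh)          # stale inode detected: fresh file opened
    fh.write("two\n"); fh.flush(); fh.close()
    rotated = (Path(d) / "app.log.1").read_text()
    current = p.read_text()
```

After the rename, `os.stat(path)` raises, the stale handle is closed, and writes land in a new file at the original path — which is what lets the collector survive `logrotate` without restarting.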
@_traced("collector.stream_container")
def _stream_container(container_id: str, log_path: Path, json_path: Path) -> None:
"""Stream logs from one container and append to the host log files."""
import docker  # type: ignore[import]
lf: Optional[Any] = None
jf: Optional[Any] = None
try:
client = docker.from_env()
container = client.containers.get(container_id)
log_stream = container.logs(stream=True, follow=True, stdout=True, stderr=False)
buf = ""
-for chunk in log_stream:
-buf += chunk.decode("utf-8", errors="replace")
-while "\n" in buf:
-line, buf = buf.split("\n", 1)
-line = line.rstrip()
-if not line:
-continue
-lf = _reopen_if_needed(log_path, lf)
-lf.write(line + "\n")
-lf.flush()
-parsed = parse_rfc5424(line)
-if parsed:
-if _should_ingest(parsed):
-_tracer = _get_tracer("collector")
-with _tracer.start_as_current_span("collector.event") as _span:
-_span.set_attribute("decky", parsed.get("decky", ""))
-_span.set_attribute("service", parsed.get("service", ""))
-_span.set_attribute("event_type", parsed.get("event_type", ""))
-_span.set_attribute("attacker_ip", parsed.get("attacker_ip", ""))
-_inject_ctx(parsed)
-logger.debug("collector: event written decky=%s type=%s", parsed.get("decky"), parsed.get("event_type"))
-jf = _reopen_if_needed(json_path, jf)
-jf.write(json.dumps(parsed) + "\n")
-jf.flush()
-else:
-logger.debug(
-"collector: rate-limited decky=%s service=%s type=%s attacker=%s",
-parsed.get("decky"), parsed.get("service"),
-parsed.get("event_type"), parsed.get("attacker_ip"),
-)
-else:
-logger.debug("collector: malformed RFC5424 line snippet=%r", line[:80])
+with (
+open(log_path, "a", encoding="utf-8") as lf,
+open(json_path, "a", encoding="utf-8") as jf,
+):
+for chunk in log_stream:
+buf += chunk.decode("utf-8", errors="replace")
+while "\n" in buf:
+line, buf = buf.split("\n", 1)
+line = line.rstrip()
+if not line:
+continue
+lf.write(line + "\n")
+lf.flush()
+parsed = parse_rfc5424(line)
+if parsed:
+jf.write(json.dumps(parsed) + "\n")
+jf.flush()
except Exception as exc:
-logger.debug("collector: log stream ended container_id=%s reason=%s", container_id, exc)
+logger.debug("Log stream ended for container %s: %s", container_id, exc)
finally:
for fh in (lf, jf):
if fh is not None:
try:
fh.close()
except Exception:  # nosec B110 — best-effort file handle cleanup
pass
# ─── Async collector ──────────────────────────────────────────────────────────
@@ -295,26 +164,15 @@ async def log_collector_worker(log_file: str) -> None:
active: dict[str, asyncio.Task[None]] = {}
loop = asyncio.get_running_loop()
# Dedicated thread pool so long-running container log streams don't
# saturate the default asyncio executor and starve short-lived
# to_thread() calls elsewhere (e.g. load_state in the web API).
collector_pool = ThreadPoolExecutor(
max_workers=64, thread_name_prefix="decnet-collector",
)
def _spawn(container_id: str, container_name: str) -> None:
if container_id not in active or active[container_id].done():
active[container_id] = asyncio.ensure_future(
-loop.run_in_executor(
-collector_pool, _stream_container,
-container_id, log_path, json_path,
-),
+asyncio.to_thread(_stream_container, container_id, log_path, json_path),
loop=loop,
)
-logger.info("collector: streaming container=%s", container_name)
+logger.info("Collecting logs from container: %s", container_name)
try:
logger.info("collector started log_path=%s", log_path)
client = docker.from_env()
for container in client.containers.list():
@@ -332,15 +190,11 @@ async def log_collector_worker(log_file: str) -> None:
if cid and is_service_event(attrs):
loop.call_soon_threadsafe(_spawn, cid, name)
-await loop.run_in_executor(collector_pool, _watch_events)
+await asyncio.to_thread(_watch_events)
except asyncio.CancelledError:
logger.info("collector shutdown requested cancelling %d tasks", len(active))
for task in active.values():
task.cancel()
collector_pool.shutdown(wait=False)
raise
except Exception as exc:
-logger.error("collector error: %s", exc)
+logger.error("Collector error: %s", exc)
finally:
collector_pool.shutdown(wait=False)
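The hunk above swaps a dedicated `ThreadPoolExecutor` for `asyncio.to_thread`; both run a blocking callable off the event loop. A minimal sketch of the `to_thread` side, with a list-returning stub standing in for a Docker log stream:

```python
import asyncio

def blocking_stream(name: str) -> list[str]:
    """Stub for a blocking per-container log stream."""
    return [f"{name}:{i}" for i in range(3)]

async def main() -> list[str]:
    # to_thread runs the blocking call in a worker thread; the loop stays free.
    return await asyncio.to_thread(blocking_stream, "decky-01")

result = asyncio.run(main())
```

The trade-off noted in the deleted comment is real: `to_thread` shares the default executor with every other `to_thread` call in the process, whereas a dedicated pool isolates the long-lived streams.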

View File

@@ -64,8 +64,6 @@ def generate_compose(config: DecnetConfig) -> dict:
# --- Service containers: share base network namespace ---
for svc_name in decky.services:
svc = get_service(svc_name)
if svc.fleet_singleton:
continue
svc_cfg = decky.service_config.get(svc_name, {})
fragment = svc.compose_fragment(decky.name, service_cfg=svc_cfg)

View File

@@ -48,49 +48,23 @@ class Rfc5424Formatter(logging.Formatter):
msg = record.getMessage()
if record.exc_info:
msg += "\n" + self.formatException(record.exc_info)
app = getattr(record, "decnet_component", self._app)
return (
-f"<{prival}>1 {ts} {self._hostname} {app}"
+f"<{prival}>1 {ts} {self._hostname} {self._app}"
f" {os.getpid()} {record.name} - {msg}"
)
def _configure_logging(dev: bool) -> None:
-"""Install RFC 5424 handlers on the root logger (idempotent).
+"""Install the RFC 5424 handler on the root logger (idempotent)."""
Always adds a StreamHandler (stderr). Also adds a RotatingFileHandler
writing to DECNET_SYSTEM_LOGS (default: decnet.system.log in $PWD) so
all microservice daemons — which redirect stderr to /dev/null — still
produce readable logs. File handler is skipped under pytest.
"""
import logging.handlers as _lh
root = logging.getLogger()
-# Guard: if our StreamHandler is already installed, all handlers are set.
+# Avoid adding duplicate handlers on re-import (e.g. during testing)
if any(isinstance(h, logging.StreamHandler) and isinstance(h.formatter, Rfc5424Formatter)
for h in root.handlers):
return
-fmt = Rfc5424Formatter()
+handler = logging.StreamHandler()
+handler.setFormatter(Rfc5424Formatter())
root.setLevel(logging.DEBUG if dev else logging.INFO)
+root.addHandler(handler)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(fmt)
root.addHandler(stream_handler)
# Skip the file handler during pytest runs to avoid polluting the test cwd.
_in_pytest = any(k.startswith("PYTEST") for k in os.environ)
if not _in_pytest:
_log_path = os.environ.get("DECNET_SYSTEM_LOGS", "decnet.system.log")
file_handler = _lh.RotatingFileHandler(
_log_path,
mode="a",
maxBytes=10 * 1024 * 1024,  # 10 MB
backupCount=5,
encoding="utf-8",
)
file_handler.setFormatter(fmt)
root.addHandler(file_handler)
_dev = os.environ.get("DECNET_DEVELOPER", "").lower() == "true"
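For reference, the line shape this formatter emits (both before and after the change) looks roughly like the sketch below. This is a simplified stand-alone rendition, not the class above; the facility value (user-level, facility 1) is an assumption, and `record.name` lands in the MSGID slot with `-` as the nil structured-data element:

```python
import logging
import os
import socket
from datetime import datetime, timezone

# RFC 5424 severity codes for the common logging levels.
_SEVERITY = {logging.DEBUG: 7, logging.INFO: 6, logging.WARNING: 4, logging.ERROR: 3}

def rfc5424_line(record: logging.LogRecord, app: str = "decnet") -> str:
    """Render <PRIVAL>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID - MSG."""
    prival = 1 * 8 + _SEVERITY.get(record.levelno, 6)  # facility "user" (1) assumed
    ts = datetime.now(timezone.utc).isoformat()
    return (
        f"<{prival}>1 {ts} {socket.gethostname()} {app}"
        f" {os.getpid()} {record.name} - {record.getMessage()}"
    )

rec = logging.LogRecord("decnet.engine", logging.INFO, "app.py", 1, "deploy complete", None, None)
line = rfc5424_line(rec)  # e.g. <14>1 2026-04-13T11:50:02+00:00 host decnet 42 decnet.engine - deploy complete
```

An INFO record yields PRIVAL 14 (1 × 8 + 6), which is what rsyslog keys its routing on.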

View File

@@ -33,7 +33,6 @@ from decnet.logging.syslog_formatter import (
SEVERITY_WARNING,
format_rfc5424,
)
from decnet.telemetry import traced as _traced, get_tracer as _get_tracer
class CorrelationEngine:
@@ -65,7 +64,6 @@ class CorrelationEngine:
self.events_indexed += 1
return event
@_traced("correlation.ingest_file")
def ingest_file(self, path: Path) -> int:
"""
Parse every line of *path* and index it.
@@ -75,18 +73,12 @@ class CorrelationEngine:
with open(path) as fh:
for line in fh:
self.ingest(line)
_tracer = _get_tracer("correlation")
with _tracer.start_as_current_span("correlation.ingest_file.summary") as _span:
_span.set_attribute("lines_parsed", self.lines_parsed)
_span.set_attribute("events_indexed", self.events_indexed)
_span.set_attribute("unique_ips", len(self._events))
return self.events_indexed
# ------------------------------------------------------------------ #
# Query                                                              #
# ------------------------------------------------------------------ #
@_traced("correlation.traversals")
def traversals(self, min_deckies: int = 2) -> list[AttackerTraversal]:
"""
Return all attackers that touched at least *min_deckies* distinct
@@ -143,7 +135,6 @@ class CorrelationEngine:
)
return table
@_traced("correlation.report_json")
def report_json(self, min_deckies: int = 2) -> dict:
"""Serialisable dict representation of all traversals."""
return {
@@ -156,7 +147,6 @@ class CorrelationEngine:
"traversals": [t.to_dict() for t in self.traversals(min_deckies)],
}
@_traced("correlation.traversal_syslog_lines")
def traversal_syslog_lines(self, min_deckies: int = 2) -> list[str]:
"""
Emit one RFC 5424 syslog line per detected traversal.

View File

@@ -38,7 +38,7 @@ _SD_BLOCK_RE = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
# Field names to probe for attacker IP, in priority order
-_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "remote_addr", "target_ip", "ip")
+_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "ip")
@dataclass

View File

@@ -11,8 +11,6 @@ import docker
from rich.console import Console
from rich.table import Table
from decnet.logging import get_logger
from decnet.telemetry import traced as _traced
from decnet.config import DecnetConfig, clear_state, load_state, save_state
from decnet.composer import write_compose
from decnet.network import (
@@ -28,7 +26,6 @@ from decnet.network import (
teardown_host_macvlan,
)
log = get_logger("engine")
console = Console()
COMPOSE_FILE = Path("decnet-compose.yml")
_CANONICAL_LOGGING = Path(__file__).parent.parent.parent / "templates" / "decnet_logging.py"
@@ -68,7 +65,6 @@ _PERMANENT_ERRORS = (
)
@_traced("engine.compose_with_retry")
def _compose_with_retry(
*args: str,
compose_file: Path = COMPOSE_FILE,
@@ -109,16 +105,12 @@ def _compose_with_retry(
raise last_exc
@_traced("engine.deploy")
def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False, parallel: bool = False) -> None:
log.info("deployment started n_deckies=%d interface=%s subnet=%s dry_run=%s", len(config.deckies), config.interface, config.subnet, dry_run)
log.debug("deploy: deckies=%s", [d.name for d in config.deckies])
client = docker.from_env()
ip_list = [d.ip for d in config.deckies]
decky_range = ips_to_range(ip_list)
host_ip = get_host_ip(config.interface)
log.debug("deploy: ip_range=%s host_ip=%s", decky_range, host_ip)
net_driver = "IPvlan L2" if config.ipvlan else "MACVLAN"
console.print(f"[bold cyan]Creating {net_driver} network[/] ({MACVLAN_NETWORK_NAME}) on {config.interface}")
@@ -148,7 +140,6 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False,
console.print(f"[bold cyan]Compose file written[/] → {compose_path}")
if dry_run:
log.info("deployment dry-run complete compose_path=%s", compose_path)
console.print("[yellow]Dry run — no containers started.[/]")
return
@@ -170,16 +161,12 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False,
_compose_with_retry("build", "--no-cache", compose_file=compose_path)
_compose_with_retry("up", "--build", "-d", compose_file=compose_path)
log.info("deployment complete n_deckies=%d", len(config.deckies))
_print_status(config)
@_traced("engine.teardown")
def teardown(decky_id: str | None = None) -> None:
log.info("teardown requested decky_id=%s", decky_id or "all")
state = load_state()
if state is None:
log.warning("teardown: no active deployment found")
console.print("[red]No active deployment found (no decnet-state.json).[/]")
return
@@ -206,7 +193,6 @@ def teardown(decky_id: str | None = None) -> None:
clear_state()
net_driver = "IPvlan" if config.ipvlan else "MACVLAN"
log.info("teardown complete all deckies removed network_driver=%s", net_driver)
console.print(f"[green]All deckies torn down. {net_driver} network removed.[/]")

View File

@@ -40,58 +40,30 @@ def _require_env(name: str) -> str:
f"Environment variable '{name}' is set to an insecure default ('{value}'). " f"Environment variable '{name}' is set to an insecure default ('{value}'). "
f"Choose a strong, unique value before starting DECNET." f"Choose a strong, unique value before starting DECNET."
) )
if name == "DECNET_JWT_SECRET" and len(value) < 32:
_developer = os.environ.get("DECNET_DEVELOPER", "False").lower() == "true"
if not _developer:
raise ValueError(
f"DECNET_JWT_SECRET is too short ({len(value)} bytes). "
f"Use at least 32 characters to satisfy HS256 requirements (RFC 7518 §3.2)."
)
return value return value
-# System logging — all microservice daemons append here.
-DECNET_SYSTEM_LOGS: str = os.environ.get("DECNET_SYSTEM_LOGS", "decnet.system.log")
-# Set to "true" to embed the profiler inside the API process.
-# Leave unset (default) when the standalone `decnet profiler --daemon` is
-# running — embedding both produces two workers sharing the same DB cursor,
-# which causes events to be skipped or processed twice.
-DECNET_EMBED_PROFILER: bool = os.environ.get("DECNET_EMBED_PROFILER", "").lower() == "true"
 # API Options
-DECNET_API_HOST: str = os.environ.get("DECNET_API_HOST", "127.0.0.1")
+DECNET_API_HOST: str = os.environ.get("DECNET_API_HOST", "0.0.0.0")  # nosec B104
 DECNET_API_PORT: int = _port("DECNET_API_PORT", 8000)
 DECNET_JWT_SECRET: str = _require_env("DECNET_JWT_SECRET")
 DECNET_INGEST_LOG_FILE: str | None = os.environ.get("DECNET_INGEST_LOG_FILE", "/var/log/decnet/decnet.log")
 # Web Dashboard Options
-DECNET_WEB_HOST: str = os.environ.get("DECNET_WEB_HOST", "127.0.0.1")
+DECNET_WEB_HOST: str = os.environ.get("DECNET_WEB_HOST", "0.0.0.0")  # nosec B104
 DECNET_WEB_PORT: int = _port("DECNET_WEB_PORT", 8080)
 DECNET_ADMIN_USER: str = os.environ.get("DECNET_ADMIN_USER", "admin")
 DECNET_ADMIN_PASSWORD: str = os.environ.get("DECNET_ADMIN_PASSWORD", "admin")
 DECNET_DEVELOPER: bool = os.environ.get("DECNET_DEVELOPER", "False").lower() == "true"
-# Tracing — set to "true" to enable OpenTelemetry distributed tracing.
-# Separate from DECNET_DEVELOPER so tracing can be toggled independently.
-DECNET_DEVELOPER_TRACING: bool = os.environ.get("DECNET_DEVELOPER_TRACING", "").lower() == "true"
-DECNET_OTEL_ENDPOINT: str = os.environ.get("DECNET_OTEL_ENDPOINT", "http://localhost:4317")
 # Database Options
 DECNET_DB_TYPE: str = os.environ.get("DECNET_DB_TYPE", "sqlite").lower()
 DECNET_DB_URL: Optional[str] = os.environ.get("DECNET_DB_URL")
-# MySQL component vars (used only when DECNET_DB_URL is not set)
-DECNET_DB_HOST: str = os.environ.get("DECNET_DB_HOST", "localhost")
-DECNET_DB_PORT: int = _port("DECNET_DB_PORT", 3306) if os.environ.get("DECNET_DB_PORT") else 3306
-DECNET_DB_NAME: str = os.environ.get("DECNET_DB_NAME", "decnet")
-DECNET_DB_USER: str = os.environ.get("DECNET_DB_USER", "decnet")
-DECNET_DB_PASSWORD: Optional[str] = os.environ.get("DECNET_DB_PASSWORD")
 # CORS — comma-separated list of allowed origins for the web dashboard API.
 # Defaults to the configured web host/port. Override with DECNET_CORS_ORIGINS if needed.
 # Example: DECNET_CORS_ORIGINS=http://192.168.1.50:9090,https://dashboard.example.com
-_WILDCARD_ADDRS = {"0.0.0.0", "127.0.0.1", "::"}  # nosec B104 — comparison only, not a bind
-_web_hostname: str = "localhost" if DECNET_WEB_HOST in _WILDCARD_ADDRS else DECNET_WEB_HOST
+_web_hostname: str = "localhost" if DECNET_WEB_HOST in ("0.0.0.0", "127.0.0.1", "::") else DECNET_WEB_HOST  # nosec B104
 _cors_default: str = f"http://{_web_hostname}:{DECNET_WEB_PORT}"
 _cors_raw: str = os.environ.get("DECNET_CORS_ORIGINS", _cors_default)
 DECNET_CORS_ORIGINS: list[str] = [o.strip() for o in _cors_raw.split(",") if o.strip()]
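The CORS default above derives a single origin from the configured web host/port, then splits any override list on commas. A minimal standalone sketch of that parsing (it does not import `decnet.config`; only the env-var name from the diff is reused):

```python
import os

def parse_cors_origins(web_host: str, web_port: int) -> list[str]:
    # Wildcard/loopback bind addresses are not meaningful browser origins,
    # so fall back to "localhost" when building the default.
    hostname = "localhost" if web_host in ("0.0.0.0", "127.0.0.1", "::") else web_host
    default = f"http://{hostname}:{web_port}"
    raw = os.environ.get("DECNET_CORS_ORIGINS", default)
    # Comma-separated list; whitespace is trimmed and empty entries
    # (e.g. from a trailing comma) are dropped.
    return [o.strip() for o in raw.split(",") if o.strip()]
```

With `DECNET_CORS_ORIGINS="http://a:1, https://b ,"` this yields `["http://a:1", "https://b"]`; unset, it yields the single default origin.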

View File

@@ -17,11 +17,8 @@ from decnet.services.registry import all_services
def all_service_names() -> list[str]:
-    """Return all registered per-decky service names (excludes fleet singletons)."""
-    return sorted(
-        name for name, svc in all_services().items()
-        if not svc.fleet_singleton
-    )
+    """Return all registered service names from the live plugin registry."""
+    return sorted(all_services().keys())
def resolve_distros(
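The change above drops the fleet-singleton filter: the old code excluded services flagged `fleet_singleton`, the new code returns every registered name. A sketch of the two behaviours against a hypothetical stub registry (the `Svc` dataclass and the service names are invented for illustration; only the `fleet_singleton` attribute comes from the diff):

```python
from dataclasses import dataclass

@dataclass
class Svc:
    fleet_singleton: bool

# Hypothetical registry: two per-decky services, one fleet-wide singleton.
registry = {"ssh": Svc(False), "http": Svc(False), "profiler": Svc(True)}

def per_decky_names(services: dict[str, Svc]) -> list[str]:
    # Old behaviour: drop services flagged as fleet singletons.
    return sorted(name for name, svc in services.items() if not svc.fleet_singleton)

def all_names(services: dict[str, Svc]) -> list[str]:
    # New behaviour: every registered name, singletons included.
    return sorted(services.keys())
```

Here `per_decky_names(registry)` gives `["http", "ssh"]` while `all_names(registry)` also includes `"profiler"`.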

View File

@@ -1,92 +0,0 @@
"""
DECNET application logging helpers.
Usage:
from decnet.logging import get_logger
log = get_logger("engine") # APP-NAME in RFC 5424 output becomes "engine"
The returned logger propagates to the root logger (configured in config.py with
Rfc5424Formatter), so level control via DECNET_DEVELOPER still applies globally.
When ``DECNET_DEVELOPER_TRACING`` is active, every LogRecord is enriched with
``otel_trace_id`` and ``otel_span_id`` from the current OTEL span context.
This lets you correlate log lines with Jaeger traces — click a log entry and
jump straight to the span that produced it.
"""
from __future__ import annotations
import logging
class _ComponentFilter(logging.Filter):
"""Injects *decnet_component* onto every LogRecord so Rfc5424Formatter can
use it as the RFC 5424 APP-NAME field instead of the hardcoded "decnet"."""
def __init__(self, component: str) -> None:
super().__init__()
self.component = component
def filter(self, record: logging.LogRecord) -> bool:
record.decnet_component = self.component # type: ignore[attr-defined]
return True
class _TraceContextFilter(logging.Filter):
"""Injects ``otel_trace_id`` and ``otel_span_id`` onto every LogRecord
from the active OTEL span context.
Installed once by ``enable_trace_context()`` on the root ``decnet`` logger
so all child loggers inherit the enrichment via propagation.
When no span is active, both fields are set to ``"0"`` (cheap string
comparison downstream, no None-checks needed).
"""
def filter(self, record: logging.LogRecord) -> bool:
try:
from opentelemetry import trace
span = trace.get_current_span()
ctx = span.get_span_context()
if ctx and ctx.trace_id:
record.otel_trace_id = format(ctx.trace_id, "032x") # type: ignore[attr-defined]
record.otel_span_id = format(ctx.span_id, "016x") # type: ignore[attr-defined]
else:
record.otel_trace_id = "0" # type: ignore[attr-defined]
record.otel_span_id = "0" # type: ignore[attr-defined]
except Exception:
record.otel_trace_id = "0" # type: ignore[attr-defined]
record.otel_span_id = "0" # type: ignore[attr-defined]
return True
_trace_filter_installed: bool = False
def enable_trace_context() -> None:
"""Install the OTEL trace-context filter on the root ``decnet`` logger.
Called once from ``decnet.telemetry.setup_tracing()`` after the
TracerProvider is initialised. Safe to call multiple times (idempotent).
"""
global _trace_filter_installed
if _trace_filter_installed:
return
root = logging.getLogger("decnet")
root.addFilter(_TraceContextFilter())
_trace_filter_installed = True
def get_logger(component: str) -> logging.Logger:
"""Return a named logger that self-identifies as *component* in RFC 5424.
Valid components: cli, engine, api, mutator, collector.
The logger is named ``decnet.<component>`` and propagates normally, so the
root handler (Rfc5424Formatter + level gate from DECNET_DEVELOPER) handles
output. Calling this function multiple times for the same component is safe.
"""
logger = logging.getLogger(f"decnet.{component}")
if not any(isinstance(f, _ComponentFilter) for f in logger.filters):
logger.addFilter(_ComponentFilter(component))
return logger
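The `_ComponentFilter` pattern above — a `logging.Filter` that stamps an attribute onto every record so a downstream formatter can reference it — works standalone; this sketch uses a plain `logging.Formatter` rather than the project's `Rfc5424Formatter`:

```python
import logging

class ComponentFilter(logging.Filter):
    """Stamp a component name onto every record passing through this logger."""
    def __init__(self, component: str) -> None:
        super().__init__()
        self.component = component

    def filter(self, record: logging.LogRecord) -> bool:
        record.decnet_component = self.component
        return True  # never suppresses the record, only enriches it

logger = logging.getLogger("decnet.engine")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
# The formatter can reference the injected attribute like any built-in field.
handler.setFormatter(logging.Formatter("%(decnet_component)s: %(message)s"))
logger.addHandler(handler)
logger.addFilter(ComponentFilter("engine"))
logger.info("started")  # formatted as "engine: started"
```

Logger-level filters run before handlers see the record, so every handler (and every propagated record) observes the enriched attribute.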

View File

@@ -13,8 +13,6 @@ import logging.handlers
 import os
 from pathlib import Path
-from decnet.telemetry import traced as _traced
 _LOG_FILE_ENV = "DECNET_LOG_FILE"
 _DEFAULT_LOG_FILE = "/var/log/decnet/decnet.log"
 _MAX_BYTES = 10 * 1024 * 1024  # 10 MB
@@ -24,10 +22,10 @@ _handler: logging.handlers.RotatingFileHandler | None = None
 _logger: logging.Logger | None = None
-@_traced("logging.init_file_handler")
-def _init_file_handler() -> logging.Logger:
-    """One-time initialisation of the rotating file handler."""
+def _get_logger() -> logging.Logger:
     global _handler, _logger
+    if _logger is not None:
+        return _logger
     log_path = Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))
     log_path.parent.mkdir(parents=True, exist_ok=True)
@@ -48,12 +46,6 @@ def _init_file_handler() -> logging.Logger:
     return _logger
-def _get_logger() -> logging.Logger:
-    if _logger is not None:
-        return _logger
-    return _init_file_handler()
 def write_syslog(line: str) -> None:
     """Write a single RFC 5424 syslog line to the rotating log file."""
     try:

View File

@@ -11,8 +11,6 @@ shared utilities for validating and parsing the log_target string.
 import socket
-from decnet.telemetry import traced as _traced
 def parse_log_target(log_target: str) -> tuple[str, int]:
     """
@@ -25,7 +23,6 @@ def parse_log_target(log_target: str) -> tuple[str, int]:
     return parts[0], int(parts[1])
-@_traced("logging.probe_log_target")
 def probe_log_target(log_target: str, timeout: float = 2.0) -> bool:
     """
     Return True if the log target is reachable (TCP connect succeeds).

View File
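The `parse_log_target` hunk above ends with `return parts[0], int(parts[1])`, i.e. a `host:port` split. Since the validation lines are outside the hunk, this standalone sketch only reconstructs the happy path plus a basic shape check (the error message wording is invented):

```python
def parse_log_target(log_target: str) -> tuple[str, int]:
    # Split on the last colon so "10.0.0.5:514" yields ("10.0.0.5", 514);
    # reject inputs without a colon or with an empty host.
    host, sep, port = log_target.rpartition(":")
    if not sep or not host:
        raise ValueError(f"log_target must be 'host:port', got {log_target!r}")
    return host, int(port)
```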

@@ -14,28 +14,22 @@ from decnet.fleet import all_service_names
 from decnet.composer import write_compose
 from decnet.config import DeckyConfig, DecnetConfig
 from decnet.engine import _compose_with_retry
-from decnet.logging import get_logger
-from decnet.telemetry import traced as _traced
 from pathlib import Path
 import anyio
 import asyncio
 from decnet.web.db.repository import BaseRepository
-log = get_logger("mutator")
 console = Console()
-@_traced("mutator.mutate_decky")
 async def mutate_decky(decky_name: str, repo: BaseRepository) -> bool:
     """
     Perform an Intra-Archetype Shuffle for a specific decky.
     Returns True if mutation succeeded, False otherwise.
     """
-    log.debug("mutate_decky: start decky=%s", decky_name)
     state_dict = await repo.get_state("deployment")
     if state_dict is None:
-        log.error("mutate_decky: no active deployment found in database")
         console.print("[red]No active deployment found in database.[/]")
         return False
@@ -79,30 +73,25 @@ async def mutate_decky(decky_name: str, repo: BaseRepository) -> bool:
     # Still writes files for Docker to use
     write_compose(config, compose_path)
-    log.info("mutation applied decky=%s services=%s", decky_name, ",".join(decky.services))
     console.print(f"[cyan]Mutating '{decky_name}' to services: {', '.join(decky.services)}[/]")
     try:
         # Wrap blocking call in thread
         await anyio.to_thread.run_sync(_compose_with_retry, "up", "-d", "--remove-orphans", compose_path)
     except Exception as e:
-        log.error("mutation failed decky=%s error=%s", decky_name, e)
         console.print(f"[red]Failed to mutate '{decky_name}': {e}[/]")
         return False
     return True
-@_traced("mutator.mutate_all")
 async def mutate_all(repo: BaseRepository, force: bool = False) -> None:
     """
     Check all deckies and mutate those that are due.
     If force=True, mutates all deckies regardless of schedule.
     """
-    log.debug("mutate_all: start force=%s", force)
     state_dict = await repo.get_state("deployment")
     if state_dict is None:
-        log.error("mutate_all: no active deployment found")
         console.print("[red]No active deployment found.[/]")
         return
@@ -127,21 +116,15 @@ async def mutate_all(repo: BaseRepository, force: bool = False) -> None:
         mutated_count += 1
     if mutated_count == 0 and not force:
-        log.debug("mutate_all: no deckies due for mutation")
         console.print("[dim]No deckies are due for mutation.[/]")
-    else:
-        log.info("mutate_all: complete mutated_count=%d", mutated_count)
-@_traced("mutator.watch_loop")
 async def run_watch_loop(repo: BaseRepository, poll_interval_secs: int = 10) -> None:
     """Run an infinite loop checking for deckies that need mutation."""
-    log.info("mutator watch loop started poll_interval_secs=%d", poll_interval_secs)
     console.print(f"[green]DECNET Mutator Watcher started (polling every {poll_interval_secs}s).[/]")
     try:
         while True:
             await mutate_all(force=False, repo=repo)
             await asyncio.sleep(poll_interval_secs)
     except KeyboardInterrupt:
-        log.info("mutator watch loop stopped")
         console.print("\n[dim]Mutator watcher stopped.[/]")

View File

@@ -1,13 +0,0 @@
"""
DECNET-PROBER — standalone active network probing service.
Runs as a detached host-level process (no container). Sends crafted TLS
probes to discover C2 frameworks and other attacker infrastructure via
JARM fingerprinting. Results are written as RFC 5424 syslog + JSON to the
same log file the collector uses, so the existing ingestion pipeline picks
them up automatically.
"""
from decnet.prober.worker import prober_worker
__all__ = ["prober_worker"]

View File

@@ -1,252 +0,0 @@
"""
HASSHServer — SSH server fingerprinting via KEX_INIT algorithm ordering.
Connects to an SSH server, completes the version exchange, captures the
server's SSH_MSG_KEXINIT message, and hashes the server-to-client algorithm
fields (kex, encryption, MAC, compression) into a 32-character MD5 digest.
This is the *server* variant of HASSH (HASSHServer). It fingerprints what
the server *offers*, which identifies the SSH implementation (OpenSSH,
Paramiko, libssh, Cobalt Strike SSH, etc.).
Stdlib only (socket, struct, hashlib) plus decnet.telemetry for tracing (zero-cost when disabled).
"""
from __future__ import annotations
import hashlib
import socket
import struct
from typing import Any
from decnet.telemetry import traced as _traced
# SSH protocol constants
_SSH_MSG_KEXINIT = 20
_KEX_INIT_COOKIE_LEN = 16
_KEX_INIT_NAME_LISTS = 10 # 10 name-list fields in KEX_INIT
# Blend in as a normal OpenSSH client
_CLIENT_BANNER = b"SSH-2.0-OpenSSH_9.6\r\n"
# Max bytes to read for server banner
_MAX_BANNER_LEN = 256
# Max bytes for a single SSH packet (KEX_INIT is typically < 2KB)
_MAX_PACKET_LEN = 35000
# ─── SSH connection + KEX_INIT capture ──────────────────────────────────────
@_traced("prober.hassh_ssh_connect")
def _ssh_connect(
host: str,
port: int,
timeout: float,
) -> tuple[str, bytes] | None:
"""
TCP connect, exchange version strings, read server's KEX_INIT.
Returns (server_banner, kex_init_payload) or None on failure.
The kex_init_payload starts at the SSH_MSG_KEXINIT type byte.
"""
sock = None
try:
sock = socket.create_connection((host, port), timeout=timeout)
sock.settimeout(timeout)
# 1. Read server banner (line ending \r\n or \n)
banner = _read_banner(sock)
if banner is None or not banner.startswith("SSH-"):
return None
# 2. Send our client version string
sock.sendall(_CLIENT_BANNER)
# 3. Read the server's first binary packet (should be KEX_INIT)
payload = _read_ssh_packet(sock)
if payload is None or len(payload) < 1:
return None
if payload[0] != _SSH_MSG_KEXINIT:
return None
return (banner, payload)
except (OSError, socket.timeout, TimeoutError, ConnectionError):
return None
finally:
if sock is not None:
try:
sock.close()
except OSError:
pass
def _read_banner(sock: socket.socket) -> str | None:
"""Read the SSH version banner line from the socket."""
buf = b""
while len(buf) < _MAX_BANNER_LEN:
try:
byte = sock.recv(1)
except (OSError, socket.timeout, TimeoutError):
return None
if not byte:
return None
buf += byte
if buf.endswith(b"\n"):
break
try:
return buf.decode("utf-8", errors="replace").rstrip("\r\n")
except Exception:
return None
def _read_ssh_packet(sock: socket.socket) -> bytes | None:
"""
Read a single SSH binary packet and return its payload.
SSH binary packet format:
uint32 packet_length (not including itself or MAC)
byte padding_length
byte[] payload (packet_length - padding_length - 1)
byte[] padding
"""
header = _recv_exact(sock, 4)
if header is None:
return None
packet_length = struct.unpack("!I", header)[0]
if packet_length < 2 or packet_length > _MAX_PACKET_LEN:
return None
rest = _recv_exact(sock, packet_length)
if rest is None:
return None
padding_length = rest[0]
payload_length = packet_length - padding_length - 1
if payload_length < 1 or payload_length > len(rest) - 1:
return None
return rest[1 : 1 + payload_length]
def _recv_exact(sock: socket.socket, n: int) -> bytes | None:
"""Read exactly n bytes from socket, or None on failure."""
buf = b""
while len(buf) < n:
try:
chunk = sock.recv(n - len(buf))
except (OSError, socket.timeout, TimeoutError):
return None
if not chunk:
return None
buf += chunk
return buf
# ─── KEX_INIT parsing ──────────────────────────────────────────────────────
def _parse_kex_init(payload: bytes) -> dict[str, str] | None:
"""
Parse SSH_MSG_KEXINIT payload and extract the 10 name-list fields.
Payload layout:
byte SSH_MSG_KEXINIT (20)
byte[16] cookie
10 × name-list:
uint32 length
byte[] utf-8 string (comma-separated algorithm names)
bool first_kex_packet_follows
uint32 reserved
Returns dict with keys: kex_algorithms, server_host_key_algorithms,
encryption_client_to_server, encryption_server_to_client,
mac_client_to_server, mac_server_to_client,
compression_client_to_server, compression_server_to_client,
languages_client_to_server, languages_server_to_client.
"""
if len(payload) < 1 + _KEX_INIT_COOKIE_LEN + 4:
return None
offset = 1 + _KEX_INIT_COOKIE_LEN # skip type byte + cookie
field_names = [
"kex_algorithms",
"server_host_key_algorithms",
"encryption_client_to_server",
"encryption_server_to_client",
"mac_client_to_server",
"mac_server_to_client",
"compression_client_to_server",
"compression_server_to_client",
"languages_client_to_server",
"languages_server_to_client",
]
fields: dict[str, str] = {}
for name in field_names:
if offset + 4 > len(payload):
return None
length = struct.unpack("!I", payload[offset : offset + 4])[0]
offset += 4
if offset + length > len(payload):
return None
fields[name] = payload[offset : offset + length].decode(
"utf-8", errors="replace"
)
offset += length
return fields
# ─── HASSH computation ──────────────────────────────────────────────────────
def _compute_hassh(kex: str, enc: str, mac: str, comp: str) -> str:
"""
Compute HASSHServer hash: MD5 of "kex;enc_s2c;mac_s2c;comp_s2c".
Returns 32-character lowercase hex digest.
"""
raw = f"{kex};{enc};{mac};{comp}"
return hashlib.md5(raw.encode("utf-8"), usedforsecurity=False).hexdigest()
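The HASSHServer digest above is simply an MD5 over the four server-to-client lists joined with semicolons. A quick standalone check (the algorithm strings are typical OpenSSH-style values, chosen here for illustration):

```python
import hashlib

def compute_hassh_server(kex: str, enc: str, mac: str, comp: str) -> str:
    # MD5 of "kex;enc_s2c;mac_s2c;comp_s2c" — a fingerprint, not a security hash.
    raw = f"{kex};{enc};{mac};{comp}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

fp = compute_hassh_server(
    "curve25519-sha256,ecdh-sha2-nistp256",
    "chacha20-poly1305@openssh.com,aes128-ctr",
    "hmac-sha2-256,hmac-sha2-512",
    "none,zlib@openssh.com",
)
assert len(fp) == 32 and all(c in "0123456789abcdef" for c in fp)
```

Because the hash covers algorithm *ordering* as well as membership, two servers offering the same algorithms in different order produce different fingerprints.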
# ─── Public API ─────────────────────────────────────────────────────────────
@_traced("prober.hassh_server")
def hassh_server(
host: str,
port: int,
timeout: float = 5.0,
) -> dict[str, Any] | None:
"""
Connect to an SSH server and compute its HASSHServer fingerprint.
Returns a dict with the hash, banner, and raw algorithm fields,
or None if the host is not running an SSH server on the given port.
"""
result = _ssh_connect(host, port, timeout)
if result is None:
return None
banner, payload = result
fields = _parse_kex_init(payload)
if fields is None:
return None
kex = fields["kex_algorithms"]
enc = fields["encryption_server_to_client"]
mac = fields["mac_server_to_client"]
comp = fields["compression_server_to_client"]
return {
"hassh_server": _compute_hassh(kex, enc, mac, comp),
"banner": banner,
"kex_algorithms": kex,
"encryption_s2c": enc,
"mac_s2c": mac,
"compression_s2c": comp,
}

View File

@@ -1,506 +0,0 @@
"""
JARM TLS fingerprinting — pure stdlib implementation.
JARM sends 10 crafted TLS ClientHello packets to a target, each varying
TLS version, cipher suite order, extensions, and ALPN values. The
ServerHello responses are parsed and hashed to produce a 62-character
fingerprint that identifies the TLS server implementation.
Reference: https://github.com/salesforce/jarm
Only DECNET import is decnet.telemetry for tracing (zero-cost when disabled).
"""
from __future__ import annotations
import hashlib
import socket
import struct
import time
from typing import Any
from decnet.telemetry import traced as _traced
# ─── Constants ────────────────────────────────────────────────────────────────
JARM_EMPTY_HASH = "0" * 62
_INTER_PROBE_DELAY = 0.1 # seconds between probes to avoid IDS triggers
# TLS version bytes
_TLS_1_0 = b"\x03\x01"
_TLS_1_1 = b"\x03\x02"
_TLS_1_2 = b"\x03\x03"
_TLS_1_3 = b"\x03\x03" # TLS 1.3 uses 0x0303 in record layer
# TLS record types
_CONTENT_HANDSHAKE = 0x16
_HANDSHAKE_CLIENT_HELLO = 0x01
_HANDSHAKE_SERVER_HELLO = 0x02
# Extension types
_EXT_SERVER_NAME = 0x0000
_EXT_EC_POINT_FORMATS = 0x000B
_EXT_SUPPORTED_GROUPS = 0x000A
_EXT_SESSION_TICKET = 0x0023
_EXT_ENCRYPT_THEN_MAC = 0x0016
_EXT_EXTENDED_MASTER_SECRET = 0x0017
_EXT_SIGNATURE_ALGORITHMS = 0x000D
_EXT_SUPPORTED_VERSIONS = 0x002B
_EXT_PSK_KEY_EXCHANGE_MODES = 0x002D
_EXT_KEY_SHARE = 0x0033
_EXT_ALPN = 0x0010
_EXT_PADDING = 0x0015
# ─── Cipher suite lists per JARM spec ────────────────────────────────────────
# Forward cipher order (standard)
_CIPHERS_FORWARD = [
0x0016, 0x0033, 0x0067, 0xC09E, 0xC0A2, 0x009E, 0x0039, 0x006B,
0xC09F, 0xC0A3, 0x009F, 0x0045, 0x00BE, 0x0088, 0x00C4, 0x009A,
0xC008, 0xC009, 0xC023, 0xC0AC, 0xC0AE, 0xC02B, 0xC00A, 0xC024,
0xC0AD, 0xC0AF, 0xC02C, 0xC072, 0xC073, 0xCCA8, 0x1301, 0x1302,
0x1303, 0xC013, 0xC014, 0xC02F, 0x009C, 0xC02E, 0x002F, 0x0035,
0x000A, 0x0005, 0x0004,
]
# Reverse cipher order
_CIPHERS_REVERSE = list(reversed(_CIPHERS_FORWARD))
# TLS 1.3-only ciphers
_CIPHERS_TLS13 = [0x1301, 0x1302, 0x1303]
# Middle-out cipher order (interleaved from center)
def _middle_out(lst: list[int]) -> list[int]:
result: list[int] = []
mid = len(lst) // 2
for i in range(mid + 1):
if mid + i < len(lst):
result.append(lst[mid + i])
if mid - i >= 0 and mid - i != mid + i:
result.append(lst[mid - i])
return result
_CIPHERS_MIDDLE_OUT = _middle_out(_CIPHERS_FORWARD)
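The middle-out ordering is easiest to see on a small list: starting at the centre index, the function alternates outward (centre, right neighbour, left neighbour, and so on):

```python
def middle_out(lst: list[int]) -> list[int]:
    # Interleave from the centre: lst[mid], lst[mid+1], lst[mid-1], lst[mid+2], ...
    result: list[int] = []
    mid = len(lst) // 2
    for i in range(mid + 1):
        if mid + i < len(lst):
            result.append(lst[mid + i])
        if mid - i >= 0 and mid - i != mid + i:
            result.append(lst[mid - i])
    return result

assert middle_out([1, 2, 3, 4, 5]) == [3, 4, 2, 5, 1]
```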
# Rare/uncommon extensions cipher list
_CIPHERS_RARE = [
0x0016, 0x0033, 0xC011, 0xC012, 0x0067, 0xC09E, 0xC0A2, 0x009E,
0x0039, 0x006B, 0xC09F, 0xC0A3, 0x009F, 0x0045, 0x00BE, 0x0088,
0x00C4, 0x009A, 0xC008, 0xC009, 0xC023, 0xC0AC, 0xC0AE, 0xC02B,
0xC00A, 0xC024, 0xC0AD, 0xC0AF, 0xC02C, 0xC072, 0xC073, 0xCCA8,
0x1301, 0x1302, 0x1303, 0xC013, 0xC014, 0xC02F, 0x009C, 0xC02E,
0x002F, 0x0035, 0x000A, 0x0005, 0x0004,
]
# ─── Probe definitions ────────────────────────────────────────────────────────
# Each probe: (tls_version, cipher_list, tls13_support, alpn, extensions_style)
# tls_version: record-layer version bytes
# cipher_list: which cipher suite ordering to use
# tls13_support: whether to include TLS 1.3 extensions (supported_versions, key_share, psk)
# alpn: ALPN protocol string or None
# extensions_style: "standard", "rare", or "no_extensions"
_PROBE_CONFIGS: list[dict[str, Any]] = [
# 0: TLS 1.2 forward
{"version": _TLS_1_2, "ciphers": _CIPHERS_FORWARD, "tls13": False, "alpn": None, "style": "standard"},
# 1: TLS 1.2 reverse
{"version": _TLS_1_2, "ciphers": _CIPHERS_REVERSE, "tls13": False, "alpn": None, "style": "standard"},
# 2: TLS 1.1 forward
{"version": _TLS_1_1, "ciphers": _CIPHERS_FORWARD, "tls13": False, "alpn": None, "style": "standard"},
# 3: TLS 1.3 forward
{"version": _TLS_1_2, "ciphers": _CIPHERS_FORWARD, "tls13": True, "alpn": "h2", "style": "standard"},
# 4: TLS 1.3 reverse
{"version": _TLS_1_2, "ciphers": _CIPHERS_REVERSE, "tls13": True, "alpn": "h2", "style": "standard"},
# 5: TLS 1.3 invalid (advertise 1.3 support but no key_share)
{"version": _TLS_1_2, "ciphers": _CIPHERS_FORWARD, "tls13": "no_key_share", "alpn": None, "style": "standard"},
# 6: TLS 1.3 middle-out
{"version": _TLS_1_2, "ciphers": _CIPHERS_MIDDLE_OUT, "tls13": True, "alpn": None, "style": "standard"},
# 7: TLS 1.0 forward
{"version": _TLS_1_0, "ciphers": _CIPHERS_FORWARD, "tls13": False, "alpn": None, "style": "standard"},
# 8: TLS 1.2 middle-out
{"version": _TLS_1_2, "ciphers": _CIPHERS_MIDDLE_OUT, "tls13": False, "alpn": None, "style": "standard"},
# 9: TLS 1.2 with rare extensions
{"version": _TLS_1_2, "ciphers": _CIPHERS_RARE, "tls13": False, "alpn": "http/1.1", "style": "rare"},
]
# ─── Extension builders ──────────────────────────────────────────────────────
def _ext(ext_type: int, data: bytes) -> bytes:
return struct.pack("!HH", ext_type, len(data)) + data
def _ext_sni(host: str) -> bytes:
host_bytes = host.encode("ascii")
# ServerNameList: length(2) + ServerName: type(1) + length(2) + name
sni_data = struct.pack("!HBH", len(host_bytes) + 3, 0, len(host_bytes)) + host_bytes
return _ext(_EXT_SERVER_NAME, sni_data)
def _ext_supported_groups() -> bytes:
groups = [0x0017, 0x0018, 0x0019, 0x001D, 0x0100, 0x0101] # secp256r1, secp384r1, secp521r1, x25519, ffdhe2048, ffdhe3072
data = struct.pack("!H", len(groups) * 2) + b"".join(struct.pack("!H", g) for g in groups)
return _ext(_EXT_SUPPORTED_GROUPS, data)
def _ext_ec_point_formats() -> bytes:
formats = b"\x00" # uncompressed only
return _ext(_EXT_EC_POINT_FORMATS, struct.pack("B", len(formats)) + formats)
def _ext_signature_algorithms() -> bytes:
algos = [
0x0401, 0x0501, 0x0601, # RSA PKCS1 SHA256/384/512
0x0201, # RSA PKCS1 SHA1
0x0403, 0x0503, 0x0603, # ECDSA SHA256/384/512
0x0203, # ECDSA SHA1
0x0804, 0x0805, 0x0806, # RSA-PSS SHA256/384/512
]
data = struct.pack("!H", len(algos) * 2) + b"".join(struct.pack("!H", a) for a in algos)
return _ext(_EXT_SIGNATURE_ALGORITHMS, data)
def _ext_supported_versions_13() -> bytes:
versions = [0x0304, 0x0303] # TLS 1.3, 1.2
data = struct.pack("B", len(versions) * 2) + b"".join(struct.pack("!H", v) for v in versions)
return _ext(_EXT_SUPPORTED_VERSIONS, data)
def _ext_psk_key_exchange_modes() -> bytes:
return _ext(_EXT_PSK_KEY_EXCHANGE_MODES, b"\x01\x01") # psk_dhe_ke
def _ext_key_share() -> bytes:
# x25519 key share with 32 random-looking bytes
key_data = b"\x00" * 32
entry = struct.pack("!HH", 0x001D, 32) + key_data # x25519 group
data = struct.pack("!H", len(entry)) + entry
return _ext(_EXT_KEY_SHARE, data)
def _ext_alpn(protocol: str) -> bytes:
proto_bytes = protocol.encode("ascii")
proto_entry = struct.pack("B", len(proto_bytes)) + proto_bytes
data = struct.pack("!H", len(proto_entry)) + proto_entry
return _ext(_EXT_ALPN, data)
def _ext_session_ticket() -> bytes:
return _ext(_EXT_SESSION_TICKET, b"")
def _ext_encrypt_then_mac() -> bytes:
return _ext(_EXT_ENCRYPT_THEN_MAC, b"")
def _ext_extended_master_secret() -> bytes:
return _ext(_EXT_EXTENDED_MASTER_SECRET, b"")
def _ext_padding(target_length: int, current_length: int) -> bytes:
pad_needed = target_length - current_length - 4 # 4 bytes for ext type + length
if pad_needed < 0:
return b""
return _ext(_EXT_PADDING, b"\x00" * pad_needed)
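Each builder above wraps its payload in the same generic type/length header. The ALPN case produces a fully deterministic byte string that can be verified by hand (0x0010 is the ALPN extension type, as in the constants above):

```python
import struct

def ext(ext_type: int, data: bytes) -> bytes:
    # Generic TLS extension: uint16 type + uint16 length + data.
    return struct.pack("!HH", ext_type, len(data)) + data

def ext_alpn(protocol: str) -> bytes:
    proto = protocol.encode("ascii")
    entry = struct.pack("B", len(proto)) + proto   # 1-byte name length + name
    data = struct.pack("!H", len(entry)) + entry   # uint16 protocol-list length
    return ext(0x0010, data)

# type=0x0010, ext len=5, list len=3, name len=2, "h2"
assert ext_alpn("h2") == b"\x00\x10\x00\x05\x00\x03\x02h2"
```

Empty-bodied extensions (session ticket, encrypt-then-MAC) are just the 4-byte header with a zero length.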
# ─── ClientHello builder ─────────────────────────────────────────────────────
def _build_client_hello(probe_index: int, host: str = "localhost") -> bytes:
"""
Construct one of 10 JARM-specified ClientHello packets.
Args:
probe_index: 0-9, selects the probe configuration
host: target hostname for SNI extension
Returns:
Complete TLS record bytes ready to send on the wire.
"""
cfg = _PROBE_CONFIGS[probe_index]
version: bytes = cfg["version"]
ciphers: list[int] = cfg["ciphers"]
tls13 = cfg["tls13"]
alpn: str | None = cfg["alpn"]
# Random (32 bytes)
random_bytes = b"\x00" * 32
# Session ID (32 bytes, all zeros)
session_id = b"\x00" * 32
# Cipher suites
cipher_bytes = b"".join(struct.pack("!H", c) for c in ciphers)
cipher_data = struct.pack("!H", len(cipher_bytes)) + cipher_bytes
# Compression methods (null only)
compression = b"\x01\x00"
# Extensions
extensions = b""
extensions += _ext_sni(host)
extensions += _ext_supported_groups()
extensions += _ext_ec_point_formats()
extensions += _ext_session_ticket()
extensions += _ext_encrypt_then_mac()
extensions += _ext_extended_master_secret()
extensions += _ext_signature_algorithms()
if tls13 == True: # noqa: E712
extensions += _ext_supported_versions_13()
extensions += _ext_psk_key_exchange_modes()
extensions += _ext_key_share()
elif tls13 == "no_key_share":
extensions += _ext_supported_versions_13()
extensions += _ext_psk_key_exchange_modes()
# Intentionally omit key_share
if alpn:
extensions += _ext_alpn(alpn)
ext_data = struct.pack("!H", len(extensions)) + extensions
# ClientHello body
body = (
version # client_version (2)
+ random_bytes # random (32)
+ struct.pack("B", len(session_id)) + session_id # session_id
+ cipher_data # cipher_suites
+ compression # compression_methods
+ ext_data # extensions
)
# Handshake header: type(1) + length(3)
handshake = struct.pack("B", _HANDSHAKE_CLIENT_HELLO) + struct.pack("!I", len(body))[1:] + body
# TLS record header: type(1) + version(2) + length(2)
record = struct.pack("B", _CONTENT_HANDSHAKE) + _TLS_1_0 + struct.pack("!H", len(handshake)) + handshake
return record
# ─── ServerHello parser ──────────────────────────────────────────────────────
def _parse_server_hello(data: bytes) -> str:
"""
Extract cipher suite and TLS version from a ServerHello response.
Returns a pipe-delimited string "cipher|version|extensions" that forms
one component of the JARM hash, or "|||" on parse failure.
"""
try:
if len(data) < 6:
return "|||"
# TLS record header
if data[0] != _CONTENT_HANDSHAKE:
return "|||"
struct.unpack_from("!H", data, 1)[0] # record_version (unused)
record_len = struct.unpack_from("!H", data, 3)[0]
hs = data[5: 5 + record_len]
if len(hs) < 4:
return "|||"
# Handshake header
if hs[0] != _HANDSHAKE_SERVER_HELLO:
return "|||"
hs_len = struct.unpack_from("!I", b"\x00" + hs[1:4])[0]
body = hs[4: 4 + hs_len]
if len(body) < 34:
return "|||"
pos = 0
# Server version
server_version = struct.unpack_from("!H", body, pos)[0]
pos += 2
# Random (32 bytes)
pos += 32
# Session ID
if pos >= len(body):
return "|||"
sid_len = body[pos]
pos += 1 + sid_len
# Cipher suite
if pos + 2 > len(body):
return "|||"
cipher = struct.unpack_from("!H", body, pos)[0]
pos += 2
# Compression method
if pos >= len(body):
return "|||"
pos += 1
# Parse extensions for supported_versions (to detect actual TLS 1.3)
actual_version = server_version
extensions_str = ""
if pos + 2 <= len(body):
ext_total = struct.unpack_from("!H", body, pos)[0]
pos += 2
ext_end = pos + ext_total
ext_types: list[str] = []
while pos + 4 <= ext_end and pos + 4 <= len(body):
ext_type = struct.unpack_from("!H", body, pos)[0]
ext_len = struct.unpack_from("!H", body, pos + 2)[0]
ext_types.append(f"{ext_type:04x}")
if ext_type == _EXT_SUPPORTED_VERSIONS and ext_len >= 2:
actual_version = struct.unpack_from("!H", body, pos + 4)[0]
pos += 4 + ext_len
extensions_str = "-".join(ext_types)
version_str = _version_to_str(actual_version)
cipher_str = f"{cipher:04x}"
return f"{cipher_str}|{version_str}|{extensions_str}"
except Exception:
return "|||"
def _version_to_str(version: int) -> str:
return {
0x0304: "tls13",
0x0303: "tls12",
0x0302: "tls11",
0x0301: "tls10",
0x0300: "ssl30",
}.get(version, f"{version:04x}")
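The record/handshake framing above can be exercised end to end with a synthetic packet. A self-contained sketch (mirroring, not reusing, the module's parsing steps; the 0x16/0x02 type bytes match the `_CONTENT_HANDSHAKE`/`_HANDSHAKE_SERVER_HELLO` constants) that builds a minimal TLS 1.2 ServerHello and extracts the `"cipher|version|extensions"` triple:

```python
import struct

def demo_parse(data: bytes) -> str:
    # TLS record header: type(1) + version(2) + length(2)
    if len(data) < 6 or data[0] != 0x16:
        return "|||"
    record_len = struct.unpack_from("!H", data, 3)[0]
    hs = data[5:5 + record_len]
    if len(hs) < 4 or hs[0] != 0x02:
        return "|||"
    # 3-byte handshake length, left-padded to unpack as a 32-bit int
    body = hs[4:4 + struct.unpack_from("!I", b"\x00" + hs[1:4])[0]]
    pos = 2 + 32                # skip server_version + random
    pos += 1 + body[pos]        # skip session_id
    cipher = struct.unpack_from("!H", body, pos)[0]
    version = struct.unpack_from("!H", body, 0)[0]
    ver = {0x0303: "tls12"}.get(version, f"{version:04x}")
    return f"{cipher:04x}|{ver}|"

# Synthetic ServerHello: TLS 1.2, empty session id, cipher 0xC02F, no extensions.
body = struct.pack("!H", 0x0303) + b"\x00" * 32 + b"\x00" + struct.pack("!H", 0xC02F) + b"\x00"
hs = b"\x02" + struct.pack("!I", len(body))[1:] + body
record = b"\x16" + b"\x03\x03" + struct.pack("!H", len(hs)) + hs
result = demo_parse(record)
```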
# ─── Probe sender ────────────────────────────────────────────────────────────
@_traced("prober.jarm_send_probe")
def _send_probe(host: str, port: int, hello: bytes, timeout: float = 5.0) -> bytes | None:
"""
Open a TCP connection, send the ClientHello, and read the ServerHello.
Returns raw response bytes or None on any failure.
"""
try:
sock = socket.create_connection((host, port), timeout=timeout)
try:
sock.sendall(hello)
sock.settimeout(timeout)
response = b""
while True:
chunk = sock.recv(1484)
if not chunk:
break
response += chunk
# We only need the first TLS record (ServerHello)
if len(response) >= 5:
record_len = struct.unpack_from("!H", response, 3)[0]
if len(response) >= 5 + record_len:
break
return response if response else None
finally:
sock.close()
except OSError:  # socket.error and socket.timeout are aliases/subclasses of OSError
return None
# ─── JARM hash computation ───────────────────────────────────────────────────
def _compute_jarm(responses: list[str]) -> str:
"""
Compute the final 62-character JARM hash from 10 probe response strings.
The first 30 characters are the raw cipher/version concatenation.
The remaining 32 characters are a truncated SHA256 of the extensions.
"""
if all(r == "|||" for r in responses):
return JARM_EMPTY_HASH
# Collect the extension strings for the hash portion; the cipher/version
# parts are consumed in the loop below.
ext_parts: list[str] = []
for r in responses:
parts = r.split("|")
if len(parts) >= 3 and parts[0] != "":
ext_parts.append(parts[2])
else:
ext_parts.append("")
# Build the 30-char fuzzy part: each of the 10 probes contributes 3 chars
# (first two hex digits of the cipher suite + one version char). The final
# 32 chars are a truncated SHA256 of the joined extension strings.
fuzzy_raw = ""
for r in responses:
parts = r.split("|")
if len(parts) >= 3 and parts[0] != "":
cipher = parts[0] # 4-char hex
version = parts[1]
ver_char = {
"tls13": "d", "tls12": "c", "tls11": "b",
"tls10": "a", "ssl30": "0",
}.get(version, "0")
fuzzy_raw += f"{cipher[0:2]}{ver_char}"
else:
fuzzy_raw += "000"
# fuzzy_raw is 30 chars (3 * 10)
ext_str = ",".join(ext_parts)
ext_hash = hashlib.sha256(ext_str.encode()).hexdigest()[:32]
return fuzzy_raw + ext_hash
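How the 62-character output decomposes can be shown standalone. A minimal sketch (not part of the module) replaying the fuzzy_raw + SHA256 composition on hypothetical probe responses — the cipher/extension values here are illustrative only:

```python
# Sketch: two probes answered with cipher 0xc02f on TLS 1.2, eight timed out.
import hashlib

responses = ["c02f|tls12|000b-0017"] * 2 + ["|||"] * 8
ver_map = {"tls13": "d", "tls12": "c", "tls11": "b", "tls10": "a", "ssl30": "0"}
fuzzy = ""
exts = []
for r in responses:
    cipher, version, ext = r.split("|")[:3]
    if cipher:
        # 3 chars per probe: first two cipher hex digits + version char
        fuzzy += cipher[:2] + ver_map.get(version, "0")
        exts.append(ext)
    else:
        fuzzy += "000"
        exts.append("")
# 30-char fuzzy part + 32-char truncated SHA256 of the joined extensions
jarm = fuzzy + hashlib.sha256(",".join(exts).encode()).hexdigest()[:32]
```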
# ─── Public API ──────────────────────────────────────────────────────────────
@_traced("prober.jarm_hash")
def jarm_hash(host: str, port: int, timeout: float = 5.0) -> str:
"""
Compute the JARM fingerprint for a TLS server.
Sends 10 crafted ClientHello packets and hashes the responses.
Args:
host: target IP or hostname
port: target port
timeout: per-probe TCP timeout in seconds
Returns:
62-character JARM hash string, or all-zeros on total failure.
"""
responses: list[str] = []
for i in range(10):
hello = _build_client_hello(i, host=host)
raw = _send_probe(host, port, hello, timeout=timeout)
if raw is not None:
parsed = _parse_server_hello(raw)
responses.append(parsed)
else:
responses.append("|||")
if i < 9:
time.sleep(_INTER_PROBE_DELAY)
return _compute_jarm(responses)


@@ -1,227 +0,0 @@
"""
TCP/IP stack fingerprinting via SYN-ACK analysis.
Sends a crafted TCP SYN packet to a target host:port, captures the
SYN-ACK response, and extracts OS/tool-identifying characteristics:
TTL, window size, DF bit, MSS, window scale, SACK support, timestamps,
and TCP options ordering.
Uses scapy for packet crafting and parsing. Requires root/CAP_NET_RAW.
"""
from __future__ import annotations
import hashlib
import random
from typing import Any
from decnet.telemetry import traced as _traced
# Lazy-import scapy to avoid breaking non-root usage of HASSH/JARM.
# The actual import happens inside functions that need it.
# ─── TCP option short codes ─────────────────────────────────────────────────
_OPT_CODES: dict[str, str] = {
"MSS": "M",
"WScale": "W",
"SAckOK": "S",
"SAck": "S",
"Timestamp": "T",
"NOP": "N",
"EOL": "E",
"AltChkSum": "A",
"AltChkSumOpt": "A",
"UTO": "U",
}
# ─── Packet construction ───────────────────────────────────────────────────
@_traced("prober.tcpfp_send_syn")
def _send_syn(
host: str,
port: int,
timeout: float,
) -> Any | None:
"""
Craft a TCP SYN with common options and send it. Returns the
SYN-ACK response packet or None on timeout/failure.
"""
from scapy.all import IP, TCP, conf, sr1
# Suppress scapy's noisy output
conf.verb = 0
src_port = random.randint(49152, 65535) # nosec B311 — ephemeral port, not crypto
pkt = (
IP(dst=host)
/ TCP(
sport=src_port,
dport=port,
flags="S",
options=[
("MSS", 1460),
("NOP", None),
("WScale", 7),
("NOP", None),
("NOP", None),
("Timestamp", (0, 0)),
("SAckOK", b""),
("EOL", None),
],
)
)
try:
resp = sr1(pkt, timeout=timeout, verbose=0)
except (OSError, PermissionError):
return None
if resp is None:
return None
# Verify it's a SYN-ACK (flags == 0x12); TCP is already imported above.
if not resp.haslayer(TCP):
return None
if resp[TCP].flags != 0x12:  # SYN-ACK
return None
# Send RST to clean up half-open connection
_send_rst(host, port, src_port, resp)
return resp
def _send_rst(
host: str,
dport: int,
sport: int,
resp: Any,
) -> None:
"""Send RST to clean up the half-open connection."""
try:
from scapy.all import IP, TCP, send
rst = (
IP(dst=host)
/ TCP(
sport=sport,
dport=dport,
flags="R",
seq=resp.ack,
)
)
send(rst, verbose=0)
except Exception: # nosec B110 — best-effort RST cleanup
pass
# ─── Response parsing ───────────────────────────────────────────────────────
def _parse_synack(resp: Any) -> dict[str, Any]:
"""
Extract fingerprint fields from a scapy SYN-ACK response packet.
"""
from scapy.all import IP, TCP
ip_layer = resp[IP]
tcp_layer = resp[TCP]
# IP fields
ttl = ip_layer.ttl
df_bit = 1 if (ip_layer.flags & 0x2) else 0 # DF = bit 1
ip_id = ip_layer.id
# TCP fields
window_size = tcp_layer.window
# Parse TCP options
mss = 0
window_scale = -1
sack_ok = 0
timestamp = 0
options_order = _extract_options_order(tcp_layer.options)
for opt_name, opt_value in tcp_layer.options:
if opt_name == "MSS":
mss = opt_value
elif opt_name == "WScale":
window_scale = opt_value
elif opt_name in ("SAckOK", "SAck"):
sack_ok = 1
elif opt_name == "Timestamp":
timestamp = 1
return {
"ttl": ttl,
"window_size": window_size,
"df_bit": df_bit,
"ip_id": ip_id,
"mss": mss,
"window_scale": window_scale,
"sack_ok": sack_ok,
"timestamp": timestamp,
"options_order": options_order,
}
def _extract_options_order(options: list[tuple[str, Any]]) -> str:
"""
Map scapy TCP option tuples to a short-code string.
E.g. [("MSS", 1460), ("NOP", None), ("WScale", 7)] → "M,N,W"
"""
codes = []
for opt_name, _ in options:
code = _OPT_CODES.get(opt_name, "?")
codes.append(code)
return ",".join(codes)
# ─── Fingerprint computation ───────────────────────────────────────────────
def _compute_fingerprint(fields: dict[str, Any]) -> tuple[str, str]:
"""
Compute fingerprint raw string and SHA256 hash from parsed fields.
Returns (raw_string, hash_hex_32).
"""
raw = (
f"{fields['ttl']}:{fields['window_size']}:{fields['df_bit']}:"
f"{fields['mss']}:{fields['window_scale']}:{fields['sack_ok']}:"
f"{fields['timestamp']}:{fields['options_order']}"
)
h = hashlib.sha256(raw.encode("utf-8")).hexdigest()[:32]
return raw, h
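The raw string's shape is easiest to see with concrete values. A sketch using illustrative Linux-like SYN-ACK fields (TTL 64, window 29200, options M,S,T,N,W are assumed example values, not captured data):

```python
import hashlib

fields = {
    "ttl": 64, "window_size": 29200, "df_bit": 1, "mss": 1460,
    "window_scale": 7, "sack_ok": 1, "timestamp": 1,
    "options_order": "M,S,T,N,W",
}
# Colon-delimited field concatenation, then a truncated SHA256
raw = (
    f"{fields['ttl']}:{fields['window_size']}:{fields['df_bit']}:"
    f"{fields['mss']}:{fields['window_scale']}:{fields['sack_ok']}:"
    f"{fields['timestamp']}:{fields['options_order']}"
)
fp_hash = hashlib.sha256(raw.encode("utf-8")).hexdigest()[:32]
```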
# ─── Public API ─────────────────────────────────────────────────────────────
@_traced("prober.tcp_fingerprint")
def tcp_fingerprint(
host: str,
port: int,
timeout: float = 5.0,
) -> dict[str, Any] | None:
"""
Send a TCP SYN to host:port and fingerprint the SYN-ACK response.
Returns a dict with the hash, raw fingerprint string, and individual
fields, or None if no SYN-ACK was received.
Requires root/CAP_NET_RAW.
"""
resp = _send_syn(host, port, timeout)
if resp is None:
return None
fields = _parse_synack(resp)
raw, h = _compute_fingerprint(fields)
return {
"tcpfp_hash": h,
"tcpfp_raw": raw,
**fields,
}


@@ -1,478 +0,0 @@
"""
DECNET-PROBER standalone worker.
Runs as a detached host-level process. Discovers attacker IPs by tailing the
collector's JSON log file, then fingerprints them via multiple active probes:
- JARM (TLS server fingerprinting)
- HASSHServer (SSH server fingerprinting)
- TCP/IP stack fingerprinting (OS/tool identification)
Results are written as RFC 5424 syslog + JSON to the same log files.
Target discovery is fully automatic — every unique attacker IP seen in the
log stream gets probed. No manual target list required.
Tech debt: writing directly to the collector's log files couples the
prober to the collector's file format. A future refactor should introduce
a shared log-sink abstraction.
"""
from __future__ import annotations
import asyncio
import json
import re
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
from decnet.logging import get_logger
from decnet.prober.hassh import hassh_server
from decnet.prober.jarm import JARM_EMPTY_HASH, jarm_hash
from decnet.prober.tcpfp import tcp_fingerprint
from decnet.telemetry import traced as _traced
logger = get_logger("prober")
# ─── Default ports per probe type ───────────────────────────────────────────
# JARM: common C2 callback / TLS server ports
DEFAULT_PROBE_PORTS: list[int] = [
443, 8443, 8080, 4443, 50050, 2222, 993, 995, 8888, 9001,
]
# HASSHServer: common SSH server ports
DEFAULT_SSH_PORTS: list[int] = [22, 2222, 22222, 2022]
# TCP/IP stack: probe on ports commonly open on attacker machines.
# Wide spread gives the best chance of a SYN-ACK for TTL/fingerprint extraction.
DEFAULT_TCPFP_PORTS: list[int] = [22, 80, 443, 8080, 8443, 445, 3389]
# ─── RFC 5424 formatting (inline, mirrors templates/*/decnet_logging.py) ─────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_SEVERITY_INFO = 6
_SEVERITY_WARNING = 4
_MAX_HOSTNAME = 255
_MAX_APPNAME = 48
_MAX_MSGID = 32
def _sd_escape(value: str) -> str:
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _sd_element(fields: dict[str, Any]) -> str:
if not fields:
return "-"
params = " ".join(f'{k}="{_sd_escape(str(v))}"' for k, v in fields.items())
return f"[{_SD_ID} {params}]"
def _syslog_line(
event_type: str,
severity: int = _SEVERITY_INFO,
msg: str | None = None,
**fields: Any,
) -> str:
pri = f"<{_FACILITY_LOCAL0 * 8 + severity}>"
ts = datetime.now(timezone.utc).isoformat()
hostname = "decnet-prober"
appname = "prober"
msgid = (event_type or "-")[:_MAX_MSGID]
sd = _sd_element(fields)
message = f" {msg}" if msg else ""
return f"{pri}1 {ts} {hostname} {appname} - {msgid} {sd}{message}"
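The PRI and SD-element composition can be checked in isolation. A sketch with facility local0 (16) and severity info (6), so PRI is 16 * 8 + 6 = 134; the field values are hypothetical:

```python
def sd_escape(value: str) -> str:
    # Escape backslash, double-quote, and closing bracket per RFC 5424 SD rules
    return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")

pri = 16 * 8 + 6
fields = {"target_ip": "203.0.113.9", "note": 'quote " and ] inside'}
params = " ".join(f'{k}="{sd_escape(str(v))}"' for k, v in fields.items())
sd = f"[decnet@55555 {params}]"
line = f"<{pri}>1 2026-04-13T11:50:02+00:00 decnet-prober prober - jarm_fingerprint {sd}"
```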
# ─── RFC 5424 parser (subset of collector's, for JSON generation) ─────────────
_RFC5424_RE = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME
r"(\S+) " # 3: APP-NAME
r"- " # PROCID
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD + MSG
)
_SD_BLOCK_RE = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "ip", "target_ip")
def _parse_to_json(line: str) -> dict[str, Any] | None:
m = _RFC5424_RE.match(line)
if not m:
return None
ts_raw, decky, service, event_type, sd_rest = m.groups()
fields: dict[str, str] = {}
msg = ""
if sd_rest.startswith("["):
block = _SD_BLOCK_RE.search(sd_rest)
if block:
for k, v in _PARAM_RE.findall(block.group(1)):
fields[k] = v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
msg_match = re.search(r'\]\s+(.+)$', sd_rest)
if msg_match:
msg = msg_match.group(1).strip()
attacker_ip = "Unknown"
for fname in _IP_FIELDS:
if fname in fields:
attacker_ip = fields[fname]
break
try:
ts_formatted = datetime.fromisoformat(ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
ts_formatted = ts_raw
return {
"timestamp": ts_formatted,
"decky": decky,
"service": service,
"event_type": event_type,
"attacker_ip": attacker_ip,
"fields": fields,
"msg": msg,
"raw_line": line,
}
# ─── Log writer ──────────────────────────────────────────────────────────────
def _write_event(
log_path: Path,
json_path: Path,
event_type: str,
severity: int = _SEVERITY_INFO,
msg: str | None = None,
**fields: Any,
) -> None:
line = _syslog_line(event_type, severity=severity, msg=msg, **fields)
with open(log_path, "a", encoding="utf-8") as f:
f.write(line + "\n")
f.flush()
parsed = _parse_to_json(line)
if parsed:
with open(json_path, "a", encoding="utf-8") as f:
f.write(json.dumps(parsed) + "\n")
f.flush()
# ─── Target discovery from log stream ────────────────────────────────────────
@_traced("prober.discover_attackers")
def _discover_attackers(json_path: Path, position: int) -> tuple[set[str], int]:
"""
Read new JSON log lines from the given position and extract unique
attacker IPs. Returns (new_ips, new_position).
Only considers IPs that are not "Unknown" and come from events that
indicate real attacker interaction (not prober's own events).
"""
new_ips: set[str] = set()
if not json_path.exists():
return new_ips, position
size = json_path.stat().st_size
if size < position:
position = 0 # file rotated
if size == position:
return new_ips, position
with open(json_path, "r", encoding="utf-8", errors="replace") as f:
f.seek(position)
while True:
line = f.readline()
if not line:
break
if not line.endswith("\n"):
break # partial line
try:
record = json.loads(line.strip())
except json.JSONDecodeError:
position = f.tell()
continue
# Skip our own events
if record.get("service") == "prober":
position = f.tell()
continue
ip = record.get("attacker_ip", "Unknown")
if ip != "Unknown" and ip:
new_ips.add(ip)
position = f.tell()
return new_ips, position
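The position-tracking tail loop above can be demonstrated with a throwaway file. A self-contained sketch of the same pattern (complete-line reads from a saved offset, rotation detected by a shrinking file); function and field names here are illustrative:

```python
import json
import os
import tempfile

def tail_ips(path: str, position: int) -> tuple[set[str], int]:
    size = os.path.getsize(path)
    if size < position:
        position = 0  # file rotated/truncated
    ips: set[str] = set()
    with open(path, "r", encoding="utf-8") as f:
        f.seek(position)
        for line in iter(f.readline, ""):
            if not line.endswith("\n"):
                break  # partial line; re-read on the next cycle
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                position = f.tell()
                continue
            ip = rec.get("attacker_ip", "Unknown")
            if ip and ip != "Unknown":
                ips.add(ip)
            position = f.tell()
    return ips, position

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write(json.dumps({"attacker_ip": "198.51.100.7"}) + "\n")
    name = tmp.name
ips1, pos1 = tail_ips(name, 0)
with open(name, "a") as f:
    f.write(json.dumps({"attacker_ip": "198.51.100.8"}) + "\n")
ips2, pos2 = tail_ips(name, pos1)  # only the new line is read
os.unlink(name)
```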
# ─── Probe cycle ─────────────────────────────────────────────────────────────
@_traced("prober.probe_cycle")
def _probe_cycle(
targets: set[str],
probed: dict[str, dict[str, set[int]]],
jarm_ports: list[int],
ssh_ports: list[int],
tcpfp_ports: list[int],
log_path: Path,
json_path: Path,
timeout: float = 5.0,
) -> None:
"""
Probe all known attacker IPs with JARM, HASSH, and TCP/IP fingerprinting.
Args:
targets: set of attacker IPs to probe
probed: dict mapping IP -> {probe_type -> set of ports already probed}
jarm_ports: TLS ports for JARM fingerprinting
ssh_ports: SSH ports for HASSHServer fingerprinting
tcpfp_ports: ports for TCP/IP stack fingerprinting
log_path: RFC 5424 log file
json_path: JSON log file
timeout: per-probe TCP timeout
"""
for ip in sorted(targets):
ip_probed = probed.setdefault(ip, {})
# Phase 1: JARM (TLS fingerprinting)
_jarm_phase(ip, ip_probed, jarm_ports, log_path, json_path, timeout)
# Phase 2: HASSHServer (SSH fingerprinting)
_hassh_phase(ip, ip_probed, ssh_ports, log_path, json_path, timeout)
# Phase 3: TCP/IP stack fingerprinting
_tcpfp_phase(ip, ip_probed, tcpfp_ports, log_path, json_path, timeout)
@_traced("prober.jarm_phase")
def _jarm_phase(
ip: str,
ip_probed: dict[str, set[int]],
ports: list[int],
log_path: Path,
json_path: Path,
timeout: float,
) -> None:
"""JARM-fingerprint an IP on the given TLS ports."""
done = ip_probed.setdefault("jarm", set())
for port in ports:
if port in done:
continue
try:
h = jarm_hash(ip, port, timeout=timeout)
done.add(port)
if h == JARM_EMPTY_HASH:
continue
_write_event(
log_path, json_path,
"jarm_fingerprint",
target_ip=ip,
target_port=str(port),
jarm_hash=h,
msg=f"JARM {ip}:{port} = {h}",
)
logger.info("prober: JARM %s:%d = %s", ip, port, h)
except Exception as exc:
done.add(port)
_write_event(
log_path, json_path,
"prober_error",
severity=_SEVERITY_WARNING,
target_ip=ip,
target_port=str(port),
error=str(exc),
msg=f"JARM probe failed for {ip}:{port}: {exc}",
)
logger.warning("prober: JARM probe failed %s:%d: %s", ip, port, exc)
@_traced("prober.hassh_phase")
def _hassh_phase(
ip: str,
ip_probed: dict[str, set[int]],
ports: list[int],
log_path: Path,
json_path: Path,
timeout: float,
) -> None:
"""HASSHServer-fingerprint an IP on the given SSH ports."""
done = ip_probed.setdefault("hassh", set())
for port in ports:
if port in done:
continue
try:
result = hassh_server(ip, port, timeout=timeout)
done.add(port)
if result is None:
continue
_write_event(
log_path, json_path,
"hassh_fingerprint",
target_ip=ip,
target_port=str(port),
hassh_server_hash=result["hassh_server"],
ssh_banner=result["banner"],
kex_algorithms=result["kex_algorithms"],
encryption_s2c=result["encryption_s2c"],
mac_s2c=result["mac_s2c"],
compression_s2c=result["compression_s2c"],
msg=f"HASSH {ip}:{port} = {result['hassh_server']}",
)
logger.info("prober: HASSH %s:%d = %s", ip, port, result["hassh_server"])
except Exception as exc:
done.add(port)
_write_event(
log_path, json_path,
"prober_error",
severity=_SEVERITY_WARNING,
target_ip=ip,
target_port=str(port),
error=str(exc),
msg=f"HASSH probe failed for {ip}:{port}: {exc}",
)
logger.warning("prober: HASSH probe failed %s:%d: %s", ip, port, exc)
@_traced("prober.tcpfp_phase")
def _tcpfp_phase(
ip: str,
ip_probed: dict[str, set[int]],
ports: list[int],
log_path: Path,
json_path: Path,
timeout: float,
) -> None:
"""TCP/IP stack fingerprint an IP on the given ports."""
done = ip_probed.setdefault("tcpfp", set())
for port in ports:
if port in done:
continue
try:
result = tcp_fingerprint(ip, port, timeout=timeout)
done.add(port)
if result is None:
continue
_write_event(
log_path, json_path,
"tcpfp_fingerprint",
target_ip=ip,
target_port=str(port),
tcpfp_hash=result["tcpfp_hash"],
tcpfp_raw=result["tcpfp_raw"],
ttl=str(result["ttl"]),
window_size=str(result["window_size"]),
df_bit=str(result["df_bit"]),
mss=str(result["mss"]),
window_scale=str(result["window_scale"]),
sack_ok=str(result["sack_ok"]),
timestamp=str(result["timestamp"]),
options_order=result["options_order"],
msg=f"TCPFP {ip}:{port} = {result['tcpfp_hash']}",
)
logger.info("prober: TCPFP %s:%d = %s", ip, port, result["tcpfp_hash"])
except Exception as exc:
done.add(port)
_write_event(
log_path, json_path,
"prober_error",
severity=_SEVERITY_WARNING,
target_ip=ip,
target_port=str(port),
error=str(exc),
msg=f"TCPFP probe failed for {ip}:{port}: {exc}",
)
logger.warning("prober: TCPFP probe failed %s:%d: %s", ip, port, exc)
# ─── Main worker ─────────────────────────────────────────────────────────────
@_traced("prober.worker")
async def prober_worker(
log_file: str,
interval: int = 300,
timeout: float = 5.0,
ports: list[int] | None = None,
ssh_ports: list[int] | None = None,
tcpfp_ports: list[int] | None = None,
) -> None:
"""
Main entry point for the standalone prober process.
Discovers attacker IPs automatically by tailing the JSON log file,
then fingerprints each IP via JARM, HASSH, and TCP/IP stack probes.
Args:
log_file: base path for log files (RFC 5424 to .log, JSON to .json)
interval: seconds between probe cycles
timeout: per-probe TCP timeout
ports: JARM TLS ports (defaults to DEFAULT_PROBE_PORTS)
ssh_ports: HASSH SSH ports (defaults to DEFAULT_SSH_PORTS)
tcpfp_ports: TCP fingerprint ports (defaults to DEFAULT_TCPFP_PORTS)
"""
jarm_ports = ports or DEFAULT_PROBE_PORTS
hassh_ports = ssh_ports or DEFAULT_SSH_PORTS
tcp_ports = tcpfp_ports or DEFAULT_TCPFP_PORTS
all_ports_str = (
f"jarm={','.join(str(p) for p in jarm_ports)} "
f"ssh={','.join(str(p) for p in hassh_ports)} "
f"tcpfp={','.join(str(p) for p in tcp_ports)}"
)
log_path = Path(log_file)
json_path = log_path.with_suffix(".json")
log_path.parent.mkdir(parents=True, exist_ok=True)
logger.info(
"prober started interval=%ds %s log=%s",
interval, all_ports_str, log_path,
)
_write_event(
log_path, json_path,
"prober_startup",
interval=str(interval),
probe_ports=all_ports_str,
msg=f"DECNET-PROBER started, interval {interval}s, {all_ports_str}",
)
known_attackers: set[str] = set()
probed: dict[str, dict[str, set[int]]] = {} # IP -> {type -> ports}
log_position: int = 0
while True:
# Discover new attacker IPs from the log stream
new_ips, log_position = await asyncio.to_thread(
_discover_attackers, json_path, log_position,
)
if new_ips - known_attackers:
fresh = new_ips - known_attackers
known_attackers.update(fresh)
logger.info(
"prober: discovered %d new attacker(s), total=%d",
len(fresh), len(known_attackers),
)
if known_attackers:
await asyncio.to_thread(
_probe_cycle, known_attackers, probed,
jarm_ports, hassh_ports, tcp_ports,
log_path, json_path, timeout,
)
await asyncio.sleep(interval)


@@ -1,5 +0,0 @@
"""DECNET profiler — standalone attacker profile builder worker."""
from decnet.profiler.worker import attacker_profile_worker
__all__ = ["attacker_profile_worker"]


@@ -1,602 +0,0 @@
"""
Behavioral and timing analysis for DECNET attacker profiles.
Consumes the chronological `LogEvent` stream already built by
`decnet.correlation.engine.CorrelationEngine` and derives per-IP metrics:
- Inter-event timing statistics (mean / median / stdev / min / max)
- Coefficient-of-variation (jitter metric)
- Beaconing vs. interactive vs. scanning vs. brute_force vs. slow_scan
classification
- Tool attribution against known C2 frameworks (Cobalt Strike, Sliver,
Havoc, Mythic) using default beacon/jitter profiles — returns a list,
since multiple tools can be in use simultaneously
- Header-based tool detection (Nmap NSE, Gophish, Nikto, sqlmap, etc.)
from HTTP request events
- Recon → exfil phase sequencing (latency between the last recon event
and the first exfil-like event)
- OS / TCP fingerprint + retransmit rollup from sniffer-emitted events,
with TTL-based fallback when p0f returns no match
Pure-Python; no external dependencies. All functions are safe to call from
both sync and async contexts.
"""
from __future__ import annotations
import json
import re
import statistics
from collections import Counter
from typing import Any
from decnet.correlation.parser import LogEvent
from decnet.telemetry import traced as _traced, get_tracer as _get_tracer
# ─── Event-type taxonomy ────────────────────────────────────────────────────
# Sniffer-emitted packet events that feed into fingerprint rollup.
_SNIFFER_SYN_EVENT: str = "tcp_syn_fingerprint"
_SNIFFER_FLOW_EVENT: str = "tcp_flow_timing"
# Prober-emitted active-probe result (SYN-ACK fingerprint of attacker machine).
_PROBER_TCPFP_EVENT: str = "tcpfp_fingerprint"
# Canonical initial TTL for each coarse OS bucket. Used to derive hop
# distance when only the observed TTL is available (prober path).
_INITIAL_TTL: dict[str, int] = {
"linux": 64,
"windows": 128,
"embedded": 255,
}
# Events that signal "recon" phase (scans, probes, auth attempts).
_RECON_EVENT_TYPES: frozenset[str] = frozenset({
"scan", "connection", "banner", "probe",
"login_attempt", "auth", "auth_failure",
})
# Events that signal "exfil" / action-on-objective phase.
_EXFIL_EVENT_TYPES: frozenset[str] = frozenset({
"download", "upload", "file_transfer", "data_exfil",
"command", "exec", "query", "shell_input",
})
# Fields carrying payload byte counts (for "large payload" detection).
_PAYLOAD_SIZE_FIELDS: tuple[str, ...] = ("bytes", "size", "content_length")
# ─── C2 tool attribution signatures (beacon timing) ─────────────────────────
#
# Each entry lists the default beacon cadence profile of a popular C2.
# A profile *matches* an attacker when:
# - mean inter-event time is within ±`interval_tolerance` seconds, AND
# - jitter (cv = stdev / mean) is within ±`jitter_tolerance`
#
# Multiple matches are all returned (attacker may run multiple implants).
_TOOL_SIGNATURES: tuple[dict[str, Any], ...] = (
{
"name": "cobalt_strike",
"interval_s": 60.0,
"interval_tolerance_s": 8.0,
"jitter_cv": 0.20,
"jitter_tolerance": 0.05,
},
{
"name": "sliver",
"interval_s": 60.0,
"interval_tolerance_s": 10.0,
"jitter_cv": 0.30,
"jitter_tolerance": 0.08,
},
{
"name": "havoc",
"interval_s": 45.0,
"interval_tolerance_s": 8.0,
"jitter_cv": 0.10,
"jitter_tolerance": 0.03,
},
{
"name": "mythic",
"interval_s": 30.0,
"interval_tolerance_s": 6.0,
"jitter_cv": 0.15,
"jitter_tolerance": 0.03,
},
)
# ─── Header-based tool signatures ───────────────────────────────────────────
#
# Scanned against HTTP `request` events. `pattern` is a case-insensitive
# substring (or a regex anchored with ^ if it starts with that character).
# `header` is matched case-insensitively against the event's headers dict.
_HEADER_TOOL_SIGNATURES: tuple[dict[str, str], ...] = (
{"name": "nmap", "header": "user-agent", "pattern": "Nmap Scripting Engine"},
{"name": "gophish", "header": "x-mailer", "pattern": "gophish"},
{"name": "nikto", "header": "user-agent", "pattern": "Nikto"},
{"name": "sqlmap", "header": "user-agent", "pattern": "sqlmap"},
{"name": "nuclei", "header": "user-agent", "pattern": "Nuclei"},
{"name": "masscan", "header": "user-agent", "pattern": "masscan"},
{"name": "zgrab", "header": "user-agent", "pattern": "zgrab"},
{"name": "metasploit", "header": "user-agent", "pattern": "Metasploit"},
{"name": "curl", "header": "user-agent", "pattern": "^curl/"},
{"name": "python_requests", "header": "user-agent", "pattern": "python-requests"},
{"name": "gobuster", "header": "user-agent", "pattern": "gobuster"},
{"name": "dirbuster", "header": "user-agent", "pattern": "DirBuster"},
{"name": "hydra", "header": "user-agent", "pattern": "hydra"},
{"name": "wfuzz", "header": "user-agent", "pattern": "Wfuzz"},
)
# ─── TTL → coarse OS bucket (fallback when p0f returns nothing) ─────────────
def _os_from_ttl(ttl_str: str | None) -> str | None:
"""Derive a coarse OS guess from observed TTL when p0f has no match."""
if not ttl_str:
return None
try:
ttl = int(ttl_str)
except (TypeError, ValueError):
return None
if 55 <= ttl <= 70:
return "linux"
if 115 <= ttl <= 135:
return "windows"
if 235 <= ttl <= 255:
return "embedded"
return None
# ─── Timing stats ───────────────────────────────────────────────────────────
@_traced("profiler.timing_stats")
def timing_stats(events: list[LogEvent]) -> dict[str, Any]:
"""
Compute inter-arrival-time statistics across *events* (sorted by ts).
Returns a dict with:
mean_iat_s, median_iat_s, stdev_iat_s, min_iat_s, max_iat_s, cv,
event_count, duration_s
For n < 2 events the interval-based fields are None/0.
"""
if not events:
return {
"event_count": 0,
"duration_s": 0.0,
"mean_iat_s": None,
"median_iat_s": None,
"stdev_iat_s": None,
"min_iat_s": None,
"max_iat_s": None,
"cv": None,
}
sorted_events = sorted(events, key=lambda e: e.timestamp)
duration_s = (sorted_events[-1].timestamp - sorted_events[0].timestamp).total_seconds()
if len(sorted_events) < 2:
return {
"event_count": len(sorted_events),
"duration_s": round(duration_s, 3),
"mean_iat_s": None,
"median_iat_s": None,
"stdev_iat_s": None,
"min_iat_s": None,
"max_iat_s": None,
"cv": None,
}
iats = [
(sorted_events[i].timestamp - sorted_events[i - 1].timestamp).total_seconds()
for i in range(1, len(sorted_events))
]
# Exclude spuriously-negative (clock-skew) intervals.
iats = [v for v in iats if v >= 0]
if not iats:
return {
"event_count": len(sorted_events),
"duration_s": round(duration_s, 3),
"mean_iat_s": None,
"median_iat_s": None,
"stdev_iat_s": None,
"min_iat_s": None,
"max_iat_s": None,
"cv": None,
}
mean = statistics.fmean(iats)
median = statistics.median(iats)
stdev = statistics.pstdev(iats) if len(iats) > 1 else 0.0
cv = (stdev / mean) if mean > 0 else None
return {
"event_count": len(sorted_events),
"duration_s": round(duration_s, 3),
"mean_iat_s": round(mean, 3),
"median_iat_s": round(median, 3),
"stdev_iat_s": round(stdev, 3),
"min_iat_s": round(min(iats), 3),
"max_iat_s": round(max(iats), 3),
"cv": round(cv, 4) if cv is not None else None,
}
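For a perfectly regular event train the coefficient of variation is zero, which is the beaconing end of the spectrum. A sketch of the inter-arrival math using only the stdlib, with fabricated timestamps:

```python
import statistics
from datetime import datetime, timedelta

base = datetime(2026, 4, 13, 12, 0, 0)
# Five events exactly 60 s apart → four identical inter-arrival times
timestamps = [base + timedelta(seconds=60 * i) for i in range(5)]
iats = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
mean = statistics.fmean(iats)
stdev = statistics.pstdev(iats)
cv = stdev / mean if mean > 0 else None  # jitter metric
```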
# ─── Behavior classification ────────────────────────────────────────────────
@_traced("profiler.classify_behavior")
def classify_behavior(stats: dict[str, Any], services_count: int) -> str:
"""
Coarse behavior bucket:
beaconing | interactive | scanning | brute_force | slow_scan | mixed | unknown
Heuristics (evaluated in priority order):
* `slow_scan` — ≥ 2 services, mean IAT ≥ 10 s, ≥ 4 events
* `scanning` — ≥ 3 services with mean IAT < 10 s, or ≥ 2 services with mean IAT < 2 s; ≥ 3 events
* `brute_force` — 1 service, n ≥ 8, mean IAT < 5 s, CV < 0.6
* `beaconing` — CV < 0.35, mean IAT ≥ 5 s, ≥ 4 events
* `interactive` — mean IAT < 5 s AND CV ≥ 0.5, ≥ 6 events
* `mixed` — catch-all for sessions with enough data
* `unknown` — too few data points
"""
n = stats.get("event_count") or 0
mean = stats.get("mean_iat_s")
cv = stats.get("cv")
if n < 3 or mean is None:
return "unknown"
# Slow scan / low-and-slow: multiple services with long gaps.
# Must be checked before generic scanning so slow multi-service sessions
# don't get mis-bucketed as a fast sweep.
if services_count >= 2 and mean >= 10.0 and n >= 4:
return "slow_scan"
# Scanning: broad service sweep (multi-service) or very rapid single-service bursts.
if n >= 3 and (
(services_count >= 3 and mean < 10.0)
or (services_count >= 2 and mean < 2.0)
):
return "scanning"
# Brute force: hammering one service rapidly and repeatedly.
if services_count == 1 and n >= 8 and mean < 5.0 and cv is not None and cv < 0.6:
return "brute_force"
# Beaconing: regular cadence over multiple events.
if cv is not None and cv < 0.35 and mean >= 5.0 and n >= 4:
return "beaconing"
# Interactive: short but irregular bursts (human or tool with think time).
if cv is not None and cv >= 0.5 and mean < 5.0 and n >= 6:
return "interactive"
return "mixed"
# ─── C2 tool attribution (beacon timing) ────────────────────────────────────
def guess_tools(mean_iat_s: float | None, cv: float | None) -> list[str]:
"""
Match (mean_iat, cv) against known C2 default beacon profiles.
Returns a list of all matching tool names (may be empty). Multiple
matches are all returned because an attacker can run several implants.
"""
if mean_iat_s is None or cv is None:
return []
hits: list[str] = []
for sig in _TOOL_SIGNATURES:
if abs(mean_iat_s - sig["interval_s"]) > sig["interval_tolerance_s"]:
continue
if abs(cv - sig["jitter_cv"]) > sig["jitter_tolerance"]:
continue
hits.append(sig["name"])
return hits
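The tolerance-window test can be replayed against a single profile. A sketch using the cobalt_strike entry above (interval 60 ± 8 s, jitter cv 0.20 ± 0.05):

```python
sig = {"interval_s": 60.0, "interval_tolerance_s": 8.0,
       "jitter_cv": 0.20, "jitter_tolerance": 0.05}

def matches(mean_iat_s: float, cv: float) -> bool:
    # A profile matches only when BOTH the cadence and the jitter fall
    # inside their tolerance windows.
    return (abs(mean_iat_s - sig["interval_s"]) <= sig["interval_tolerance_s"]
            and abs(cv - sig["jitter_cv"]) <= sig["jitter_tolerance"])
```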
# Keep the old name as an alias so callers that expected a single string keep
# working, but mark it deprecated. Returns the sole match, or None when there
# are zero or multiple matches.
def guess_tool(mean_iat_s: float | None, cv: float | None) -> str | None:
"""Deprecated: use guess_tools() instead."""
hits = guess_tools(mean_iat_s, cv)
if len(hits) == 1:
return hits[0]
return None
# ─── Header-based tool detection ────────────────────────────────────────────
@_traced("profiler.detect_tools_from_headers")
def detect_tools_from_headers(events: list[LogEvent]) -> list[str]:
"""
Scan HTTP `request` events for tool-identifying headers.
Checks User-Agent, X-Mailer, and other headers case-insensitively
against `_HEADER_TOOL_SIGNATURES`. Returns a deduplicated list of
matched tool names in detection order.
"""
found: list[str] = []
seen: set[str] = set()
for e in events:
if e.event_type != "request":
continue
raw_headers = e.fields.get("headers")
if not raw_headers:
continue
# headers may arrive as a JSON string, a Python-repr string (legacy),
# or a dict already (in-memory / test paths).
if isinstance(raw_headers, str):
try:
headers: dict[str, str] = json.loads(raw_headers)
except (json.JSONDecodeError, ValueError):
# Backward-compat: events written before the JSON-encode fix
# were serialized as Python repr via str(dict). ast.literal_eval
# handles that safely (no arbitrary code execution).
try:
import ast as _ast
_parsed = _ast.literal_eval(raw_headers)
if isinstance(_parsed, dict):
headers = _parsed
else:
continue
except Exception: # nosec B112 — skip unparseable header values
continue
elif isinstance(raw_headers, dict):
headers = raw_headers
else:
continue
# Normalise header keys to lowercase for matching.
lc_headers: dict[str, str] = {k.lower(): str(v) for k, v in headers.items()}
for sig in _HEADER_TOOL_SIGNATURES:
name = sig["name"]
if name in seen:
continue
value = lc_headers.get(sig["header"])
if value is None:
continue
pattern = sig["pattern"]
if pattern.startswith("^"):
if re.match(pattern, value, re.IGNORECASE):
found.append(name)
seen.add(name)
else:
if pattern.lower() in value.lower():
found.append(name)
seen.add(name)
return found
# ─── Phase sequencing ───────────────────────────────────────────────────────
@_traced("profiler.phase_sequence")
def phase_sequence(events: list[LogEvent]) -> dict[str, Any]:
"""
Derive recon→exfil phase transition info.
Returns:
recon_end_ts : ISO timestamp of last recon-class event (or None)
exfil_start_ts : ISO timestamp of first exfil-class event (or None)
exfil_latency_s : seconds between them (None if not both present)
large_payload_count: count of events whose *fields* report a payload
≥ 1 MiB (heuristic for bulk data transfer)
"""
recon_end = None
exfil_start = None
large_payload_count = 0
for e in sorted(events, key=lambda x: x.timestamp):
if e.event_type in _RECON_EVENT_TYPES:
recon_end = e.timestamp
elif e.event_type in _EXFIL_EVENT_TYPES and exfil_start is None:
exfil_start = e.timestamp
for fname in _PAYLOAD_SIZE_FIELDS:
raw = e.fields.get(fname)
if raw is None:
continue
try:
if int(raw) >= 1_048_576:
large_payload_count += 1
break
except (TypeError, ValueError):
continue
latency: float | None = None
if recon_end is not None and exfil_start is not None and exfil_start >= recon_end:
latency = round((exfil_start - recon_end).total_seconds(), 3)
return {
"recon_end_ts": recon_end.isoformat() if recon_end else None,
"exfil_start_ts": exfil_start.isoformat() if exfil_start else None,
"exfil_latency_s": latency,
"large_payload_count": large_payload_count,
}
# ─── Sniffer rollup (OS fingerprint + retransmits) ──────────────────────────
@_traced("profiler.sniffer_rollup")
def sniffer_rollup(events: list[LogEvent]) -> dict[str, Any]:
"""
Roll up sniffer-emitted `tcp_syn_fingerprint` and `tcp_flow_timing`
events into a per-attacker summary.
OS guess priority:
1. Modal p0f label from os_guess field (if not "unknown"/empty).
2. TTL-based coarse bucket (linux / windows / embedded) as fallback.
Hop distance: median of non-zero reported values only.
"""
os_guesses: list[str] = []
ttl_values: list[str] = []
hops: list[int] = []
tcp_fp: dict[str, Any] | None = None
retransmits = 0
for e in events:
if e.event_type == _SNIFFER_SYN_EVENT:
og = e.fields.get("os_guess")
if og and og != "unknown":
os_guesses.append(og)
# Collect raw TTL for fallback OS derivation.
ttl_raw = e.fields.get("ttl") or e.fields.get("initial_ttl")
if ttl_raw:
ttl_values.append(ttl_raw)
# Only include hop distances that are valid and non-zero.
hop_raw = e.fields.get("hop_distance")
if hop_raw:
try:
hop_val = int(hop_raw)
if hop_val > 0:
hops.append(hop_val)
except (TypeError, ValueError):
pass
# Keep the latest fingerprint snapshot.
tcp_fp = {
"window": _int_or_none(e.fields.get("window")),
"wscale": _int_or_none(e.fields.get("wscale")),
"mss": _int_or_none(e.fields.get("mss")),
"options_sig": e.fields.get("options_sig", ""),
"has_sack": e.fields.get("has_sack") == "true",
"has_timestamps": e.fields.get("has_timestamps") == "true",
}
elif e.event_type == _SNIFFER_FLOW_EVENT:
try:
retransmits += int(e.fields.get("retransmits", "0"))
except (TypeError, ValueError):
pass
elif e.event_type == _PROBER_TCPFP_EVENT:
# Active-probe result: prober sent SYN to attacker, got SYN-ACK back.
# Field names differ from the passive sniffer (different emitter).
ttl_raw = e.fields.get("ttl")
if ttl_raw:
ttl_values.append(ttl_raw)
# Derive hop distance from observed TTL vs canonical initial TTL.
os_hint = _os_from_ttl(ttl_raw)
if os_hint:
initial = _INITIAL_TTL.get(os_hint)
if initial:
try:
hop_val = initial - int(ttl_raw)
if hop_val > 0:
hops.append(hop_val)
except (TypeError, ValueError):
pass
# Prober uses window_size/window_scale/options_order instead of
# the sniffer's window/wscale/options_sig.
tcp_fp = {
"window": _int_or_none(e.fields.get("window_size")),
"wscale": _int_or_none(e.fields.get("window_scale")),
"mss": _int_or_none(e.fields.get("mss")),
"options_sig": e.fields.get("options_order", ""),
"has_sack": e.fields.get("sack_ok") == "1",
"has_timestamps": e.fields.get("timestamp") == "1",
}
# Mode for the OS bucket — most frequently observed label.
os_guess: str | None = None
if os_guesses:
os_guess = Counter(os_guesses).most_common(1)[0][0]
else:
# TTL-based fallback: use the most common observed TTL value.
if ttl_values:
modal_ttl = Counter(ttl_values).most_common(1)[0][0]
os_guess = _os_from_ttl(modal_ttl)
# Median hop distance (robust to the occasional weird TTL).
hop_distance: int | None = None
if hops:
hop_distance = int(statistics.median(hops))
return {
"os_guess": os_guess,
"hop_distance": hop_distance,
"tcp_fingerprint": tcp_fp or {},
"retransmit_count": retransmits,
}
def _int_or_none(v: Any) -> int | None:
if v is None or v == "":
return None
try:
return int(v)
except (TypeError, ValueError):
return None
# ─── Composite: build the full AttackerBehavior record ──────────────────────
@_traced("profiler.build_behavior_record")
def build_behavior_record(events: list[LogEvent]) -> dict[str, Any]:
"""
Build the dict to persist in the `attacker_behavior` table.
JSON-typed fields are encoded here (rather than by the profiler-worker
caller) so the repo layer stays schema-agnostic.
"""
# Timing stats are computed across *all* events (not filtered), because
# a C2 beacon often reuses the same "connection" event_type on each
# check-in. Filtering would throw that signal away.
stats = timing_stats(events)
services = {e.service for e in events}
behavior = classify_behavior(stats, len(services))
rollup = sniffer_rollup(events)
phase = phase_sequence(events)
# Combine beacon-timing tool matches with header-based detections.
beacon_tools = guess_tools(stats.get("mean_iat_s"), stats.get("cv"))
header_tools = detect_tools_from_headers(events)
all_tools: list[str] = list(dict.fromkeys(beacon_tools + header_tools)) # dedup, preserve order
# Promote TCP-level scanner identification to tool_guesses.
# p0f fingerprints nmap from the TCP handshake alone — this fires even
# when no HTTP service is present, making it far more reliable than the
# header-based path for raw port scans.
if rollup["os_guess"] == "nmap" and "nmap" not in all_tools:
all_tools.insert(0, "nmap")
# Beacon-specific projection: only surface interval/jitter when we've
# classified the flow as beaconing (otherwise these numbers are noise).
beacon_interval_s: float | None = None
beacon_jitter_pct: float | None = None
if behavior == "beaconing":
beacon_interval_s = stats.get("mean_iat_s")
cv = stats.get("cv")
beacon_jitter_pct = round(cv * 100, 2) if cv is not None else None
_tracer = _get_tracer("profiler")
with _tracer.start_as_current_span("profiler.behavior_summary") as _span:
_span.set_attribute("behavior_class", behavior)
_span.set_attribute("os_guess", rollup["os_guess"] or "unknown")
_span.set_attribute("tool_count", len(all_tools))
_span.set_attribute("event_count", stats.get("event_count", 0))
if all_tools:
_span.set_attribute("tools", ",".join(all_tools))
return {
"os_guess": rollup["os_guess"],
"hop_distance": rollup["hop_distance"],
"tcp_fingerprint": json.dumps(rollup["tcp_fingerprint"]),
"retransmit_count": rollup["retransmit_count"],
"behavior_class": behavior,
"beacon_interval_s": beacon_interval_s,
"beacon_jitter_pct": beacon_jitter_pct,
"tool_guesses": json.dumps(all_tools),
"timing_stats": json.dumps(stats),
"phase_sequence": json.dumps(phase),
}


@@ -1,215 +0,0 @@
"""
Attacker profile builder — incremental background worker.
Maintains a persistent CorrelationEngine and a log-ID cursor across cycles.
On cold start (first cycle or process restart), performs one full build from
all stored logs. Subsequent cycles fetch only new logs via the cursor,
ingest them into the existing engine, and rebuild profiles for affected IPs
only.
Complexity per cycle: O(new_logs + affected_ips) instead of O(total_logs²).
"""
from __future__ import annotations
import asyncio
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any
from decnet.correlation.engine import CorrelationEngine
from decnet.correlation.parser import LogEvent
from decnet.logging import get_logger
from decnet.profiler.behavioral import build_behavior_record
from decnet.telemetry import traced as _traced, get_tracer as _get_tracer
from decnet.web.db.repository import BaseRepository
logger = get_logger("attacker_worker")
_BATCH_SIZE = 500
_STATE_KEY = "attacker_worker_cursor"
# Event types that indicate active command/query execution (not just connection/scan)
_COMMAND_EVENT_TYPES = frozenset({
"command", "exec", "query", "input", "shell_input",
"execute", "run", "sql_query", "redis_command",
})
# Fields that carry the executed command/query text
_COMMAND_FIELDS = ("command", "query", "input", "line", "sql", "cmd")
@dataclass
class _WorkerState:
engine: CorrelationEngine = field(default_factory=CorrelationEngine)
last_log_id: int = 0
initialized: bool = False
async def attacker_profile_worker(repo: BaseRepository, *, interval: int = 30) -> None:
"""Periodically updates the Attacker table incrementally. Designed to run as an asyncio Task."""
logger.info("attacker profile worker started interval=%ds", interval)
state = _WorkerState()
_saved_cursor = await repo.get_state(_STATE_KEY)
if _saved_cursor:
state.last_log_id = _saved_cursor.get("last_log_id", 0)
state.initialized = True
logger.info("attacker worker: resumed from cursor last_log_id=%d", state.last_log_id)
while True:
await asyncio.sleep(interval)
try:
await _incremental_update(repo, state)
except Exception as exc:
logger.error("attacker worker: update failed: %s", exc)
@_traced("profiler.incremental_update")
async def _incremental_update(repo: BaseRepository, state: _WorkerState) -> None:
was_cold = not state.initialized
affected_ips: set[str] = set()
while True:
batch = await repo.get_logs_after_id(state.last_log_id, limit=_BATCH_SIZE)
if not batch:
break
for row in batch:
event = state.engine.ingest(row["raw_line"])
if event and event.attacker_ip:
affected_ips.add(event.attacker_ip)
state.last_log_id = row["id"]
await asyncio.sleep(0) # yield to event loop after each batch
if len(batch) < _BATCH_SIZE:
break
state.initialized = True
if not affected_ips:
await repo.set_state(_STATE_KEY, {"last_log_id": state.last_log_id})
return
await _update_profiles(repo, state, affected_ips)
await repo.set_state(_STATE_KEY, {"last_log_id": state.last_log_id})
if was_cold:
logger.info("attacker worker: cold start rebuilt %d profiles", len(affected_ips))
else:
logger.info("attacker worker: updated %d profiles (incremental)", len(affected_ips))
@_traced("profiler.update_profiles")
async def _update_profiles(
repo: BaseRepository,
state: _WorkerState,
ips: set[str],
) -> None:
traversal_map = {t.attacker_ip: t for t in state.engine.traversals(min_deckies=2)}
bounties_map = await repo.get_bounties_for_ips(ips)
_tracer = _get_tracer("profiler")
for ip in ips:
events = state.engine._events.get(ip, [])
if not events:
continue
with _tracer.start_as_current_span("profiler.process_ip") as _span:
_span.set_attribute("attacker_ip", ip)
_span.set_attribute("event_count", len(events))
traversal = traversal_map.get(ip)
bounties = bounties_map.get(ip, [])
commands = _extract_commands_from_events(events)
record = _build_record(ip, events, traversal, bounties, commands)
attacker_uuid = await repo.upsert_attacker(record)
_span.set_attribute("is_traversal", traversal is not None)
_span.set_attribute("bounty_count", len(bounties))
_span.set_attribute("command_count", len(commands))
# Behavioral / fingerprint rollup lives in a sibling table so failures
# here never block the core attacker profile upsert.
try:
behavior = build_behavior_record(events)
await repo.upsert_attacker_behavior(attacker_uuid, behavior)
except Exception as exc:
_span.record_exception(exc)
logger.error("attacker worker: behavior upsert failed for %s: %s", ip, exc)
def _build_record(
ip: str,
events: list[LogEvent],
traversal: Any,
bounties: list[dict[str, Any]],
commands: list[dict[str, Any]],
) -> dict[str, Any]:
services = sorted({e.service for e in events})
deckies = (
traversal.deckies
if traversal
else _first_contact_deckies(events)
)
fingerprints = [b for b in bounties if b.get("bounty_type") == "fingerprint"]
credential_count = sum(1 for b in bounties if b.get("bounty_type") == "credential")
return {
"ip": ip,
"first_seen": min(e.timestamp for e in events),
"last_seen": max(e.timestamp for e in events),
"event_count": len(events),
"service_count": len(services),
"decky_count": len({e.decky for e in events}),
"services": json.dumps(services),
"deckies": json.dumps(deckies),
"traversal_path": traversal.path if traversal else None,
"is_traversal": traversal is not None,
"bounty_count": len(bounties),
"credential_count": credential_count,
"fingerprints": json.dumps(fingerprints),
"commands": json.dumps(commands),
"updated_at": datetime.now(timezone.utc),
}
def _first_contact_deckies(events: list[LogEvent]) -> list[str]:
"""Return unique deckies in first-contact order (for non-traversal attackers)."""
seen: list[str] = []
for e in sorted(events, key=lambda x: x.timestamp):
if e.decky not in seen:
seen.append(e.decky)
return seen
def _extract_commands_from_events(events: list[LogEvent]) -> list[dict[str, Any]]:
"""
Extract executed commands from LogEvent objects.
Works directly on LogEvent.fields (already a dict), so no JSON parsing needed.
"""
commands: list[dict[str, Any]] = []
for event in events:
if event.event_type not in _COMMAND_EVENT_TYPES:
continue
cmd_text: str | None = None
for key in _COMMAND_FIELDS:
val = event.fields.get(key)
if val:
cmd_text = str(val)
break
if not cmd_text:
continue
commands.append({
"service": event.service,
"decky": event.decky,
"command": cmd_text,
"timestamp": event.timestamp.isoformat(),
})
return commands


@@ -13,7 +13,6 @@ class BaseService(ABC):
name: str  # unique slug, e.g. "ssh", "smb"
ports: list[int]  # ports this service listens on inside the container
default_image: str  # Docker image tag, or "build" if a Dockerfile is needed
fleet_singleton: bool = False  # True = runs once fleet-wide, not per-decky
@abstractmethod
def compose_fragment(


@@ -1,59 +0,0 @@
import json
from pathlib import Path
from decnet.services.base import BaseService
TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "https"
class HTTPSService(BaseService):
name = "https"
ports = [443]
default_image = "build"
def compose_fragment(
self,
decky_name: str,
log_target: str | None = None,
service_cfg: dict | None = None,
) -> dict:
cfg = service_cfg or {}
fragment: dict = {
"build": {"context": str(TEMPLATES_DIR)},
"container_name": f"{decky_name}-https",
"restart": "unless-stopped",
"environment": {
"NODE_NAME": decky_name,
},
}
if log_target:
fragment["environment"]["LOG_TARGET"] = log_target
# Optional persona overrides — only injected when explicitly set
if "server_header" in cfg:
fragment["environment"]["SERVER_HEADER"] = cfg["server_header"]
if "response_code" in cfg:
fragment["environment"]["RESPONSE_CODE"] = str(cfg["response_code"])
if "fake_app" in cfg:
fragment["environment"]["FAKE_APP"] = cfg["fake_app"]
if "extra_headers" in cfg:
val = cfg["extra_headers"]
fragment["environment"]["EXTRA_HEADERS"] = (
json.dumps(val) if isinstance(val, dict) else val
)
if "custom_body" in cfg:
fragment["environment"]["CUSTOM_BODY"] = cfg["custom_body"]
if "files" in cfg:
files_path = str(Path(cfg["files"]).resolve())
fragment["environment"]["FILES_DIR"] = "/opt/html_files"
fragment.setdefault("volumes", []).append(f"{files_path}:/opt/html_files:ro")
if "tls_cert" in cfg:
fragment["environment"]["TLS_CERT"] = cfg["tls_cert"]
if "tls_key" in cfg:
fragment["environment"]["TLS_KEY"] = cfg["tls_key"]
if "tls_cn" in cfg:
fragment["environment"]["TLS_CN"] = cfg["tls_cn"]
return fragment
def dockerfile_context(self) -> Path | None:
return TEMPLATES_DIR
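The persona-override pattern above (inject an environment variable only when its key is explicitly set in config) can be shown in isolation. A simplified sketch covering three of the keys; `https_env` is a hypothetical free function, not part of the service class:

```python
import json


def https_env(cfg: dict) -> dict[str, str]:
    """Map optional persona overrides onto container environment
    variables. Only keys explicitly present in cfg are injected."""
    env: dict[str, str] = {}
    if "server_header" in cfg:
        env["SERVER_HEADER"] = cfg["server_header"]
    if "response_code" in cfg:
        # Compose env values must be strings.
        env["RESPONSE_CODE"] = str(cfg["response_code"])
    if "extra_headers" in cfg:
        val = cfg["extra_headers"]
        # Dicts are JSON-encoded; strings pass through unchanged.
        env["EXTRA_HEADERS"] = json.dumps(val) if isinstance(val, dict) else val
    return env
```

The absent-key check (rather than a falsy check) means an operator can explicitly set an empty string and still have it injected.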


@@ -1,41 +0,0 @@
from pathlib import Path
from decnet.services.base import BaseService
TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "sniffer"
class SnifferService(BaseService):
"""
Passive network sniffer deployed alongside deckies on the MACVLAN.
Captures TLS handshakes in promiscuous mode and extracts JA3/JA3S hashes
plus connection metadata. Requires NET_RAW + NET_ADMIN capabilities.
No inbound ports — purely passive.
"""
name = "sniffer"
ports: list[int] = []
default_image = "build"
fleet_singleton = True
def compose_fragment(
self,
decky_name: str,
log_target: str | None = None,
service_cfg: dict | None = None,
) -> dict:
fragment: dict = {
"build": {"context": str(TEMPLATES_DIR)},
"container_name": f"{decky_name}-sniffer",
"restart": "unless-stopped",
"cap_add": ["NET_RAW", "NET_ADMIN"],
"environment": {
"NODE_NAME": decky_name,
},
}
if log_target:
fragment["environment"]["LOG_TARGET"] = log_target
return fragment
def dockerfile_context(self) -> Path | None:
return TEMPLATES_DIR


@@ -1,11 +0,0 @@
"""
Fleet-wide MACVLAN sniffer microservice.
Runs as a single host-side background task (not per-decky) that sniffs
all TLS traffic on the MACVLAN interface, extracts fingerprints, and
feeds events into the existing log pipeline.
"""
from decnet.sniffer.worker import sniffer_worker
__all__ = ["sniffer_worker"]

File diff suppressed because it is too large


@@ -1,238 +0,0 @@
"""
Passive OS fingerprinting (p0f-lite) for the DECNET sniffer.
Pure-Python lookup module. Given the values of an incoming TCP SYN packet
(TTL, window, MSS, window-scale, and TCP option ordering), returns a coarse
OS bucket (linux / windows / macos_ios / freebsd / openbsd / nmap / unknown)
plus derived hop distance and inferred initial TTL.
Rationale
---------
Full p0f v3 distinguishes several dozen OS/tool profiles by combining dozens
of low-level quirks (OLEN, WSIZE, EOL padding, PCLASS, quirks, payload class).
For DECNET we only need a coarse bucket — enough to tag an attacker as
"linux beacon" vs "windows interactive" vs "active scan". The curated
table below covers default stacks that dominate real-world attacker traffic.
References (public p0f v3 DB, nmap-os-db, and Mozilla OS Fingerprint table):
https://github.com/p0f/p0f/blob/master/p0f.fp
No external dependencies.
"""
from __future__ import annotations
from decnet.telemetry import traced as _traced
# ─── TTL → initial TTL bucket ───────────────────────────────────────────────
# Common "hop 0" TTLs. Packets decrement TTL once per hop, so we round up
# the observed TTL to the nearest known starting value.
_TTL_BUCKETS: tuple[int, ...] = (32, 64, 128, 255)
def initial_ttl(ttl: int) -> int:
"""
Round *ttl* up to the nearest known initial-TTL bucket.
A SYN with TTL=59 was almost certainly emitted by a Linux/BSD host
(initial 64) five hops away; TTL=120 by a Windows host (initial 128)
eight hops away.
"""
for bucket in _TTL_BUCKETS:
if ttl <= bucket:
return bucket
return 255
def hop_distance(ttl: int) -> int:
"""
Estimate hops between the attacker and the sniffer based on TTL.
Upper-bounded at 64 (anything further has most likely been mangled
by a misconfigured firewall or a TTL-spoofing NAT).
"""
dist = initial_ttl(ttl) - ttl
if dist < 0:
return 0
if dist > 64:
return 64
return dist
# ─── OS signature table (TTL bucket, window, MSS, wscale, option-order) ─────
# Each entry is a set of loose predicates. If all predicates match, the
# OS label is returned. First-match wins. `None` means "don't care".
#
# The option signatures use the short-code alphabet from
# decnet/prober/tcpfp.py :: _OPT_CODES (M=MSS, N=NOP, W=WScale,
# T=Timestamp, S=SAckOK, E=EOL).
_SIGNATURES: tuple[tuple[dict, str], ...] = (
# ── nmap -sS / -sT default probe ───────────────────────────────────────
# nmap crafts very distinctive SYNs: tiny window (1024/4096/etc.), full
# option set including WScale=10 and SAckOK. Match these first so they
# don't get misclassified as Linux.
(
{
"ttl_bucket": 64,
"window_in": {1024, 2048, 3072, 4096, 31337, 32768, 65535},
"mss": 1460,
"wscale": 10,
"options": "M,W,T,S,S",
},
"nmap",
),
(
{
"ttl_bucket": 64,
"window_in": {1024, 2048, 3072, 4096, 31337, 32768, 65535},
"options_starts_with": "M,W,T,S",
},
"nmap",
),
# ── macOS / iOS default SYN (match before Linux — shares TTL 64) ──────
# TTL 64, window 65535, MSS 1460, WScale 6, specific option order
# M,N,W,N,N,T,S,E (Darwin signature with EOL padding).
(
{
"ttl_bucket": 64,
"window": 65535,
"wscale": 6,
"options": "M,N,W,N,N,T,S,E",
},
"macos_ios",
),
(
{
"ttl_bucket": 64,
"window_in": {65535},
"wscale_in": {5, 6},
"has_timestamps": True,
"options_ends_with": "E",
},
"macos_ios",
),
# ── FreeBSD default SYN (TTL 64, no EOL) ───────────────────────────────
(
{
"ttl_bucket": 64,
"window": 65535,
"wscale": 6,
"has_sack": True,
"has_timestamps": True,
"options_no_eol": True,
},
"freebsd",
),
# ── Linux (kernel 3.x–6.x) default SYN ───────────────────────────────
# TTL 64, window 29200 / 64240 / 65535, MSS 1460, WScale 7, full options.
(
{
"ttl_bucket": 64,
"window_min": 5000,
"wscale_in": {6, 7, 8, 9, 10, 11, 12, 13, 14},
"has_sack": True,
"has_timestamps": True,
},
"linux",
),
# ── OpenBSD default SYN ─────────────────────────────────────────────────
# TTL 64, window 16384, WScale 3-6, MSS 1460
(
{
"ttl_bucket": 64,
"window_in": {16384, 16960},
"wscale_in": {3, 4, 5, 6},
},
"openbsd",
),
# ── Windows 10/11/Server default SYN ────────────────────────────────────
# TTL 128, window 64240/65535, MSS 1460, WScale 8, SACK+TS
(
{
"ttl_bucket": 128,
"window_min": 8192,
"wscale_in": {2, 6, 7, 8},
"has_sack": True,
},
"windows",
),
# ── Windows 7/XP (legacy) ───────────────────────────────────────────────
(
{
"ttl_bucket": 128,
"window_in": {8192, 16384, 65535},
},
"windows",
),
# ── Embedded / Cisco / network gear ─────────────────────────────────────
(
{
"ttl_bucket": 255,
},
"embedded",
),
)
def _match_signature(
sig: dict,
ttl: int,
window: int,
mss: int,
wscale: int | None,
options_sig: str,
) -> bool:
"""Evaluate every predicate in *sig* against the observed values."""
tb = initial_ttl(ttl)
if "ttl_bucket" in sig and sig["ttl_bucket"] != tb:
return False
if "window" in sig and sig["window"] != window:
return False
if "window_in" in sig and window not in sig["window_in"]:
return False
if "window_min" in sig and window < sig["window_min"]:
return False
if "mss" in sig and sig["mss"] != mss:
return False
if "wscale" in sig and sig["wscale"] != wscale:
return False
if "wscale_in" in sig and wscale not in sig["wscale_in"]:
return False
if "has_sack" in sig:
if sig["has_sack"] != ("S" in options_sig):
return False
if "has_timestamps" in sig:
if sig["has_timestamps"] != ("T" in options_sig):
return False
if "options" in sig and sig["options"] != options_sig:
return False
if "options_starts_with" in sig and not options_sig.startswith(sig["options_starts_with"]):
return False
if "options_ends_with" in sig and not options_sig.endswith(sig["options_ends_with"]):
return False
if "options_no_eol" in sig and sig["options_no_eol"] and "E" in options_sig:
return False
return True
@_traced("sniffer.p0f_guess_os")
def guess_os(
ttl: int,
window: int,
mss: int = 0,
wscale: int | None = None,
options_sig: str = "",
) -> str:
"""
Return a coarse OS bucket for the given SYN characteristics.
One of: "linux", "windows", "macos_ios", "freebsd", "openbsd",
"embedded", "nmap", "unknown".
"""
for sig, label in _SIGNATURES:
if _match_signature(sig, ttl, window, mss, wscale, options_sig):
return label
return "unknown"
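The TTL rounding and hop-distance arithmetic described in the docstrings above is self-contained and easy to sanity-check in isolation; a standalone copy of the two functions:

```python
# Common "hop 0" TTLs; an observed TTL is rounded up to the nearest bucket.
TTL_BUCKETS = (32, 64, 128, 255)


def initial_ttl(ttl: int) -> int:
    """Round ttl up to the nearest known initial-TTL bucket."""
    for bucket in TTL_BUCKETS:
        if ttl <= bucket:
            return bucket
    return 255


def hop_distance(ttl: int) -> int:
    """Hops between sender and observer, clamped to [0, 64]; larger
    distances imply the TTL was mangled in transit."""
    return min(max(initial_ttl(ttl) - ttl, 0), 64)
```

This reproduces the examples in the docstring: TTL 59 rounds up to 64 (five hops), TTL 120 rounds up to 128 (eight hops).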


@@ -1,71 +0,0 @@
"""
RFC 5424 syslog formatting and log-file writing for the fleet sniffer.
Reuses the same wire format as templates/sniffer/decnet_logging.py so the
existing collector parser and ingester can consume events without changes.
"""
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
from decnet.collector.worker import parse_rfc5424
from decnet.telemetry import traced as _traced
# ─── Constants (must match templates/sniffer/decnet_logging.py) ──────────────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_NILVALUE = "-"
SEVERITY_INFO = 6
SEVERITY_WARNING = 4
_MAX_HOSTNAME = 255
_MAX_APPNAME = 48
_MAX_MSGID = 32
# ─── Formatter ───────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str:
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _sd_element(fields: dict[str, Any]) -> str:
if not fields:
return _NILVALUE
params = " ".join(f'{k}="{_sd_escape(str(v))}"' for k, v in fields.items())
return f"[{_SD_ID} {params}]"
def syslog_line(
service: str,
hostname: str,
event_type: str,
severity: int = SEVERITY_INFO,
msg: str | None = None,
**fields: Any,
) -> str:
pri = f"<{_FACILITY_LOCAL0 * 8 + severity}>"
ts = datetime.now(timezone.utc).isoformat()
host = (hostname or _NILVALUE)[:_MAX_HOSTNAME]
appname = (service or _NILVALUE)[:_MAX_APPNAME]
msgid = (event_type or _NILVALUE)[:_MAX_MSGID]
sd = _sd_element(fields)
message = f" {msg}" if msg else ""
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
@_traced("sniffer.write_event")
def write_event(line: str, log_path: Path, json_path: Path) -> None:
"""Append a syslog line to the raw log and its parsed JSON to the json log."""
with open(log_path, "a", encoding="utf-8") as lf:
lf.write(line + "\n")
lf.flush()
parsed = parse_rfc5424(line)
if parsed:
with open(json_path, "a", encoding="utf-8") as jf:
jf.write(json.dumps(parsed) + "\n")
jf.flush()
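The PRI prefix and SD-PARAM escaping above follow RFC 5424 directly: PRI is `facility * 8 + severity`, so local0 (facility 16) at severity INFO (6) yields `<134>`. A minimal standalone sketch of the two calculations:

```python
FACILITY_LOCAL0 = 16
SEVERITY_INFO = 6
SEVERITY_WARNING = 4


def pri(facility: int, severity: int) -> int:
    """RFC 5424 PRI value: facility * 8 + severity."""
    return facility * 8 + severity


def sd_escape(value: str) -> str:
    """Escape SD-PARAM values per RFC 5424 section 6.3.3:
    backslash, double quote, and closing bracket."""
    return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
```

So an INFO event is emitted with `<134>` and a WARNING event with `<132>`.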


@@ -1,160 +0,0 @@
"""
Fleet-wide MACVLAN sniffer worker.
Runs as a single host-side async background task that sniffs all TLS
traffic on the MACVLAN host interface. Maps packets to deckies by IP
and feeds fingerprint events into the existing log pipeline.
Modeled on decnet.collector.worker — same lifecycle pattern.
Fault-isolated: any exception is logged and the worker exits cleanly.
The API never depends on this worker being alive.
"""
import asyncio
import os
import subprocess # nosec B404 — needed for interface checks
import threading
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from decnet.logging import get_logger
from decnet.network import HOST_MACVLAN_IFACE
from decnet.sniffer.fingerprint import SnifferEngine
from decnet.sniffer.syslog import write_event
from decnet.telemetry import traced as _traced
logger = get_logger("sniffer")
_IP_MAP_REFRESH_INTERVAL: float = 60.0
def _load_ip_to_decky() -> dict[str, str]:
"""Build IP → decky-name mapping from decnet-state.json."""
from decnet.config import load_state
state = load_state()
if state is None:
return {}
config, _ = state
mapping: dict[str, str] = {}
for decky in config.deckies:
mapping[decky.ip] = decky.name
return mapping
def _interface_exists(iface: str) -> bool:
"""Check if a network interface exists on this host."""
try:
result = subprocess.run( # nosec B603 B607 — hardcoded args
["ip", "link", "show", iface],
capture_output=True, text=True, check=False,
)
return result.returncode == 0
except Exception:
return False
@_traced("sniffer.sniff_loop")
def _sniff_loop(
interface: str,
log_path: Path,
json_path: Path,
stop_event: threading.Event,
) -> None:
"""Blocking sniff loop. Runs in a dedicated thread via asyncio.to_thread."""
try:
from scapy.sendrecv import sniff
except ImportError:
logger.error("scapy not installed — sniffer cannot start")
return
ip_map = _load_ip_to_decky()
if not ip_map:
logger.warning("sniffer: no deckies in state — nothing to sniff")
return
def _write_fn(line: str) -> None:
write_event(line, log_path, json_path)
engine = SnifferEngine(ip_to_decky=ip_map, write_fn=_write_fn)
# Periodically refresh IP map in a background daemon thread
def _refresh_loop() -> None:
while not stop_event.is_set():
stop_event.wait(_IP_MAP_REFRESH_INTERVAL)
if stop_event.is_set():
break
try:
new_map = _load_ip_to_decky()
if new_map:
engine.update_ip_map(new_map)
except Exception as exc:
logger.debug("sniffer: ip map refresh failed: %s", exc)
refresh_thread = threading.Thread(target=_refresh_loop, daemon=True)
refresh_thread.start()
logger.info("sniffer: sniffing on interface=%s deckies=%d", interface, len(ip_map))
try:
sniff(
iface=interface,
filter="tcp",
prn=engine.on_packet,
store=False,
stop_filter=lambda pkt: stop_event.is_set(),
)
except Exception as exc:
logger.error("sniffer: scapy sniff exited: %s", exc)
finally:
stop_event.set()
logger.info("sniffer: sniff loop ended")
@_traced("sniffer.worker")
async def sniffer_worker(log_file: str) -> None:
"""
Async entry point — started as asyncio.create_task in the API lifespan.
Fully fault-isolated: catches all exceptions, logs them, and returns
cleanly. The API continues running regardless of sniffer state.
"""
try:
interface = os.environ.get("DECNET_SNIFFER_IFACE", HOST_MACVLAN_IFACE)
if not _interface_exists(interface):
logger.warning(
"sniffer: interface %s not found — sniffer disabled "
"(fleet may not be deployed yet)", interface,
)
return
log_path = Path(log_file)
json_path = log_path.with_suffix(".json")
log_path.parent.mkdir(parents=True, exist_ok=True)
stop_event = threading.Event()
# Dedicated thread pool so the long-running sniff loop doesn't
# occupy a slot in the default asyncio executor.
sniffer_pool = ThreadPoolExecutor(
max_workers=2, thread_name_prefix="decnet-sniffer",
)
try:
loop = asyncio.get_running_loop()
await loop.run_in_executor(
sniffer_pool, _sniff_loop,
interface, log_path, json_path, stop_event,
)
except asyncio.CancelledError:
logger.info("sniffer: shutdown requested")
stop_event.set()
sniffer_pool.shutdown(wait=False)
raise
finally:
sniffer_pool.shutdown(wait=False)
except asyncio.CancelledError:
raise
except Exception as exc:
logger.error("sniffer: worker failed — API continues without sniffing: %s", exc)
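The worker's concurrency pattern (a blocking loop running in a dedicated `ThreadPoolExecutor`, stopped cooperatively via a shared `threading.Event`) can be sketched generically; the function names here are hypothetical stand-ins, not DECNET APIs:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor


def blocking_loop(stop: threading.Event, out: list[int]) -> None:
    """Stand-in for the blocking sniff loop; exits when stop is set
    (here also after three iterations so the sketch terminates)."""
    n = 0
    while not stop.is_set() and n < 3:
        out.append(n)
        n += 1


async def run_worker() -> list[int]:
    stop = threading.Event()
    out: list[int] = []
    # A dedicated pool keeps the long-running loop out of the default
    # asyncio executor, mirroring sniffer_worker above.
    pool = ThreadPoolExecutor(max_workers=1, thread_name_prefix="sniffer")
    try:
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(pool, blocking_loop, stop, out)
    finally:
        stop.set()
        pool.shutdown(wait=False)
    return out
```

The `finally` block guarantees the stop flag is set and the pool is released even if the awaiting task is cancelled, which is what lets the real worker shut down cleanly on `asyncio.CancelledError`.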


@@ -1,308 +0,0 @@
"""
DECNET OpenTelemetry tracing integration.
Controlled entirely by ``DECNET_DEVELOPER_TRACING``. When disabled (the
default), every public export is a zero-cost no-op: no OTEL SDK imports, no
monkey-patching, no middleware, and ``@traced`` returns the original function
object unwrapped.
"""
from __future__ import annotations

import asyncio
import functools
import inspect
from typing import Any, Callable, TypeVar, overload

from decnet.env import DECNET_DEVELOPER_TRACING, DECNET_OTEL_ENDPOINT
from decnet.logging import get_logger

log = get_logger("api")

F = TypeVar("F", bound=Callable[..., Any])

_ENABLED: bool = DECNET_DEVELOPER_TRACING

# ---------------------------------------------------------------------------
# Lazy OTEL imports — only when tracing is enabled
# ---------------------------------------------------------------------------
_tracer_provider: Any = None  # TracerProvider | None


def _init_provider() -> None:
    """Initialise the global TracerProvider (called once from setup_tracing)."""
    global _tracer_provider
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.resources import Resource

    resource = Resource.create({
        "service.name": "decnet",
        "service.version": "0.2.0",
    })
    _tracer_provider = TracerProvider(resource=resource)
    exporter = OTLPSpanExporter(endpoint=DECNET_OTEL_ENDPOINT, insecure=True)
    _tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(_tracer_provider)
    log.info("OTEL tracing enabled endpoint=%s", DECNET_OTEL_ENDPOINT)


def setup_tracing(app: Any) -> None:
    """Configure the OTEL TracerProvider and instrument FastAPI.

    Call once from the FastAPI lifespan, after DB init. No-op when
    ``DECNET_DEVELOPER_TRACING`` is not ``"true"``.
    """
    if not _ENABLED:
        return
    try:
        _init_provider()
        from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
        FastAPIInstrumentor.instrument_app(app)
        from decnet.logging import enable_trace_context
        enable_trace_context()
        log.info("FastAPI auto-instrumentation active, log-trace correlation enabled")
    except Exception as exc:
        log.warning("OTEL setup failed — continuing without tracing: %s", exc)


def shutdown_tracing() -> None:
    """Flush and shut down the tracer provider. Safe to call when disabled."""
    if _tracer_provider is not None:
        try:
            _tracer_provider.shutdown()
        except Exception:  # nosec B110 — best-effort tracer shutdown
            pass


# ---------------------------------------------------------------------------
# get_tracer — mirrors get_logger(component) pattern
# ---------------------------------------------------------------------------
class _NoOpSpan:
    """Minimal stand-in so ``with get_tracer(...).start_as_current_span(...)``
    works when tracing is disabled."""

    def set_attribute(self, key: str, value: Any) -> None:
        pass

    def set_status(self, *args: Any, **kwargs: Any) -> None:
        pass

    def record_exception(self, exc: BaseException) -> None:
        pass

    def __enter__(self) -> "_NoOpSpan":
        return self

    def __exit__(self, *args: Any) -> None:
        pass


class _NoOpTracer:
    """Returned by ``get_tracer()`` when tracing is disabled."""

    def start_as_current_span(self, name: str, **kwargs: Any) -> _NoOpSpan:
        return _NoOpSpan()

    def start_span(self, name: str, **kwargs: Any) -> _NoOpSpan:
        return _NoOpSpan()


_tracers: dict[str, Any] = {}


def get_tracer(component: str) -> Any:
    """Return an OTEL Tracer (or a no-op stand-in) for *component*."""
    if not _ENABLED:
        return _NoOpTracer()
    if component not in _tracers:
        from opentelemetry import trace
        _tracers[component] = trace.get_tracer(f"decnet.{component}")
    return _tracers[component]


# ---------------------------------------------------------------------------
# @traced decorator — async + sync, zero overhead when disabled
# ---------------------------------------------------------------------------
@overload
def traced(fn: F) -> F: ...
@overload
def traced(name: str) -> Callable[[F], F]: ...


def traced(fn: Any = None, *, name: str | None = None) -> Any:
    """Decorator that wraps a function in an OTEL span.

    Usage::

        @traced                       # span name = "module.func"
        async def my_worker(): ...

        @traced("custom.span.name")   # explicit span name
        def my_sync_func(): ...

    When ``DECNET_DEVELOPER_TRACING`` is disabled the original function is
    returned **unwrapped** — zero overhead on every call.
    """
    # Handle @traced("name") vs @traced vs @traced(name="name")
    if fn is None and name is not None:
        # Called as @traced(name="name")
        def decorator(f: F) -> F:
            return _wrap(f, name)
        return decorator
    if fn is not None and isinstance(fn, str):
        # Called as @traced("name") — fn is actually the name string
        span_name = fn

        def decorator(f: F) -> F:
            return _wrap(f, span_name)
        return decorator
    if fn is not None and callable(fn):
        # Called as @traced (no arguments)
        return _wrap(fn, None)

    # Fallback: @traced() with no args
    def decorator(f: F) -> F:
        return _wrap(f, name)
    return decorator


def _wrap(fn: F, span_name: str | None) -> F:
    """Wrap *fn* in a span. Returns *fn* unchanged when tracing is off."""
    if not _ENABLED:
        return fn
    resolved_name = span_name or f"{fn.__module__.rsplit('.', 1)[-1]}.{fn.__qualname__}"
    if inspect.iscoroutinefunction(fn):
        @functools.wraps(fn)
        async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
            tracer = get_tracer(fn.__module__.split(".")[-1])
            with tracer.start_as_current_span(resolved_name) as span:
                try:
                    result = await fn(*args, **kwargs)
                    return result
                except Exception as exc:
                    span.record_exception(exc)
                    raise
        return async_wrapper  # type: ignore[return-value]
    else:
        @functools.wraps(fn)
        def sync_wrapper(*args: Any, **kwargs: Any) -> Any:
            tracer = get_tracer(fn.__module__.split(".")[-1])
            with tracer.start_as_current_span(resolved_name) as span:
                try:
                    result = fn(*args, **kwargs)
                    return result
                except Exception as exc:
                    span.record_exception(exc)
                    raise
        return sync_wrapper  # type: ignore[return-value]


# ---------------------------------------------------------------------------
# TracedRepository — proxy wrapper for BaseRepository
# ---------------------------------------------------------------------------
def wrap_repository(repo: Any) -> Any:
    """Wrap *repo* in a dynamic tracing proxy. Returns *repo* unchanged when disabled.

    Instead of mirroring every method signature (which drifts when concrete
    repos add extra kwargs beyond the ABC), this proxy introspects the inner
    repo at construction time and wraps every public async method in a span
    via ``__getattr__``. Sync attributes are forwarded directly.
    """
    if not _ENABLED:
        return repo
    tracer = get_tracer("db")

    class TracedRepository:
        """Dynamic proxy — wraps every async method call in a DB span."""

        def __init__(self, inner: Any) -> None:
            self._inner = inner

        def __getattr__(self, name: str) -> Any:
            attr = getattr(self._inner, name)
            if asyncio.iscoroutinefunction(attr):
                @functools.wraps(attr)
                async def _traced_method(*args: Any, **kwargs: Any) -> Any:
                    with tracer.start_as_current_span(f"db.{name}") as span:
                        try:
                            return await attr(*args, **kwargs)
                        except Exception as exc:
                            span.record_exception(exc)
                            raise
                return _traced_method
            return attr

    return TracedRepository(repo)


# ---------------------------------------------------------------------------
# Cross-stage trace context propagation
# ---------------------------------------------------------------------------
# The DECNET pipeline is decoupled via JSON files:
#   collector -> .json file -> ingester -> DB -> profiler
#
# To show the full journey of an event in Jaeger, we embed W3C trace context
# into the JSON records. The collector injects it; the ingester extracts it
# and continues the trace as a child span.
def inject_context(record: dict[str, Any]) -> None:
    """Inject current OTEL trace context into *record* under ``_trace``.

    No-op when tracing is disabled. The ``_trace`` key is stripped by the
    ingester after extraction — it never reaches the DB.
    """
    if not _ENABLED:
        return
    try:
        from opentelemetry.propagate import inject
        carrier: dict[str, str] = {}
        inject(carrier)
        if carrier:
            record["_trace"] = carrier
    except Exception:  # nosec B110 — trace injection is optional
        pass


def extract_context(record: dict[str, Any]) -> Any:
    """Extract OTEL trace context from *record* and return it.

    Returns ``None`` when tracing is disabled or no context is present.
    Removes the ``_trace`` key from the record so it doesn't leak into the DB.
    """
    if not _ENABLED:
        record.pop("_trace", None)
        return None
    try:
        carrier = record.pop("_trace", None)
        if not carrier:
            return None
        from opentelemetry.propagate import extract
        return extract(carrier)
    except Exception:
        return None


def start_span_with_context(tracer: Any, name: str, context: Any = None) -> Any:
    """Start a span, optionally as a child of an extracted context.

    Returns a context manager span. When *context* is ``None``, creates a
    root span (normal behavior).
    """
    if not _ENABLED:
        return _NoOpSpan()
    if context is not None:
        return tracer.start_as_current_span(name, context=context)
    return tracer.start_as_current_span(name)

View File

@@ -1,4 +1,5 @@
@@ -1,4 +1,5 @@
 import asyncio
+import logging
 import os
 from contextlib import asynccontextmanager
 from typing import Any, AsyncGenerator, Optional
@@ -9,40 +10,24 @@ from fastapi.responses import JSONResponse
 from pydantic import ValidationError
 from fastapi.middleware.cors import CORSMiddleware
-from decnet.env import DECNET_CORS_ORIGINS, DECNET_DEVELOPER, DECNET_EMBED_PROFILER, DECNET_INGEST_LOG_FILE
-from decnet.logging import get_logger
+from decnet.env import DECNET_CORS_ORIGINS, DECNET_DEVELOPER, DECNET_INGEST_LOG_FILE
 from decnet.web.dependencies import repo
 from decnet.collector import log_collector_worker
 from decnet.web.ingester import log_ingestion_worker
-from decnet.profiler import attacker_profile_worker
 from decnet.web.router import api_router
-log = get_logger("api")
+log = logging.getLogger(__name__)
 ingestion_task: Optional[asyncio.Task[Any]] = None
 collector_task: Optional[asyncio.Task[Any]] = None
-attacker_task: Optional[asyncio.Task[Any]] = None
-sniffer_task: Optional[asyncio.Task[Any]] = None
-def get_background_tasks() -> dict[str, Optional[asyncio.Task[Any]]]:
-    """Expose background task handles for the health endpoint."""
-    return {
-        "ingestion_worker": ingestion_task,
-        "collector_worker": collector_task,
-        "attacker_worker": attacker_task,
-        "sniffer_worker": sniffer_task,
-    }
 @asynccontextmanager
 async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
-    global ingestion_task, collector_task, attacker_task, sniffer_task
-    log.info("API startup initialising database")
+    global ingestion_task, collector_task
     for attempt in range(1, 6):
         try:
             await repo.initialize()
-            log.debug("API startup DB initialised attempt=%d", attempt)
             break
         except Exception as exc:
             log.warning("DB init attempt %d/5 failed: %s", attempt, exc)
@@ -50,51 +35,25 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
                 log.error("DB failed to initialize after 5 attempts — startup may be degraded")
             await asyncio.sleep(0.5)
-    # Conditionally enable OpenTelemetry tracing
-    from decnet.telemetry import setup_tracing
-    setup_tracing(app)
     # Start background tasks only if not in contract test mode
     if os.environ.get("DECNET_CONTRACT_TEST") != "true":
         # Start background ingestion task
         if ingestion_task is None or ingestion_task.done():
             ingestion_task = asyncio.create_task(log_ingestion_worker(repo))
-            log.debug("API startup ingest worker started")
         # Start Docker log collector (writes to log file; ingester reads from it)
         _log_file = os.environ.get("DECNET_INGEST_LOG_FILE", DECNET_INGEST_LOG_FILE)
         if _log_file and (collector_task is None or collector_task.done()):
             collector_task = asyncio.create_task(log_collector_worker(_log_file))
-            log.debug("API startup collector worker started log_file=%s", _log_file)
         elif not _log_file:
             log.warning("DECNET_INGEST_LOG_FILE not set — Docker log collection disabled.")
-        # Start attacker profile rebuild worker only when explicitly requested.
-        # Default is OFF because `decnet deploy` always starts a standalone
-        # `decnet profiler --daemon` process. Running both against the same
-        # DB cursor causes events to be skipped or double-processed.
-        if DECNET_EMBED_PROFILER:
-            if attacker_task is None or attacker_task.done():
-                attacker_task = asyncio.create_task(attacker_profile_worker(repo))
-                log.info("API startup: embedded profiler started (DECNET_EMBED_PROFILER=true)")
-        else:
-            log.debug("API startup: profiler not embedded — expecting standalone daemon")
-        # Start fleet-wide MACVLAN sniffer (fault-isolated — never crashes the API)
-        try:
-            from decnet.sniffer import sniffer_worker
-            if sniffer_task is None or sniffer_task.done():
-                sniffer_task = asyncio.create_task(sniffer_worker(_log_file))
-                log.debug("API startup sniffer worker started")
-        except Exception as exc:
-            log.warning("Sniffer worker failed to start — API continues without sniffing: %s", exc)
     else:
         log.info("Contract Test Mode: skipping background worker startup")
     yield
-    log.info("API shutdown cancelling background tasks")
-    for task in (ingestion_task, collector_task, attacker_task, sniffer_task):
+    # Shutdown background tasks
+    for task in (ingestion_task, collector_task):
         if task and not task.done():
             task.cancel()
             try:
@@ -103,9 +62,6 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
                 pass
             except Exception as exc:
                 log.warning("Task shutdown error: %s", exc)
-    from decnet.telemetry import shutdown_tracing
-    shutdown_tracing()
-    log.info("API shutdown complete")
 app: FastAPI = FastAPI(
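The `task is None or task.done()` guard used throughout this lifespan makes worker startup idempotent: re-entering the lifespan reuses a live task instead of spawning a duplicate. A small self-contained sketch of that guard (the `worker` coroutine and `ensure_started` helper are invented for illustration):

```python
import asyncio
from typing import Any, Optional

task: Optional[asyncio.Task[Any]] = None

async def worker() -> None:
    await asyncio.sleep(0)  # placeholder for a real background loop

def ensure_started() -> asyncio.Task[Any]:
    # Only spawn a new task when none exists or the previous one finished.
    global task
    if task is None or task.done():
        task = asyncio.create_task(worker())
    return task

async def main() -> None:
    t1 = ensure_started()
    t2 = ensure_started()   # guard fires: same task object is returned
    assert t1 is t2
    await t1
    t3 = ensure_started()   # previous task finished, so a fresh one starts
    assert t3 is not t1
    await t3

asyncio.run(main())
```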

View File

@@ -1,33 +1,18 @@
"""
Repository factory — selects a :class:`BaseRepository` implementation based on
``DECNET_DB_TYPE`` (``sqlite`` or ``mysql``).
"""
from __future__ import annotations
import os
from typing import Any from typing import Any
from decnet.env import os
from decnet.web.db.repository import BaseRepository from decnet.web.db.repository import BaseRepository
def get_repository(**kwargs: Any) -> BaseRepository: def get_repository(**kwargs: Any) -> BaseRepository:
"""Instantiate the repository implementation selected by ``DECNET_DB_TYPE``. """Factory function to instantiate the correct repository implementation based on environment."""
Keyword arguments are forwarded to the concrete implementation:
* SQLite accepts ``db_path``.
* MySQL accepts ``url`` and engine tuning knobs (``pool_size``, …).
"""
db_type = os.environ.get("DECNET_DB_TYPE", "sqlite").lower() db_type = os.environ.get("DECNET_DB_TYPE", "sqlite").lower()
if db_type == "sqlite": if db_type == "sqlite":
from decnet.web.db.sqlite.repository import SQLiteRepository from decnet.web.db.sqlite.repository import SQLiteRepository
repo = SQLiteRepository(**kwargs) return SQLiteRepository(**kwargs)
elif db_type == "mysql": elif db_type == "mysql":
from decnet.web.db.mysql.repository import MySQLRepository # Placeholder for future implementation
repo = MySQLRepository(**kwargs) # from decnet.web.db.mysql.repository import MySQLRepository
# return MySQLRepository()
raise NotImplementedError("MySQL support is planned but not yet implemented.")
else: else:
raise ValueError(f"Unsupported database type: {db_type}") raise ValueError(f"Unsupported database type: {db_type}")
from decnet.telemetry import wrap_repository
return wrap_repository(repo)
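The factory above reads `DECNET_DB_TYPE` at call time and imports only the branch actually taken, so the unused backend's dependencies are never loaded. A reduced, runnable sketch of that pattern; `SQLiteRepo` is a dummy stand-in, not the real repository class:

```python
import os

class SQLiteRepo:
    """Dummy stand-in for the real SQLite repository."""

def get_repository() -> object:
    # Read the switch lazily so it can be flipped per-call via the environment.
    db_type = os.environ.get("DECNET_DB_TYPE", "sqlite").lower()
    if db_type == "sqlite":
        return SQLiteRepo()
    if db_type == "mysql":
        raise NotImplementedError("MySQL support is planned but not yet implemented.")
    raise ValueError(f"Unsupported database type: {db_type}")

os.environ["DECNET_DB_TYPE"] = "sqlite"
assert isinstance(get_repository(), SQLiteRepo)

os.environ["DECNET_DB_TYPE"] = "oracle"
try:
    get_repository()
except ValueError as exc:
    print(exc)  # the unsupported backend fails loudly instead of defaulting
```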

View File

@@ -1,17 +1,9 @@
 from datetime import datetime, timezone
-from typing import Literal, Optional, Any, List, Annotated
+from typing import Optional, Any, List, Annotated
-from sqlalchemy import Column, Text
-from sqlalchemy.dialects.mysql import MEDIUMTEXT
 from sqlmodel import SQLModel, Field
 from pydantic import BaseModel, ConfigDict, Field as PydanticField, BeforeValidator
 from decnet.models import IniContent
-# Use on columns that accumulate over an attacker's lifetime (commands,
-# fingerprints, state blobs). TEXT on MySQL caps at 64 KiB; MEDIUMTEXT
-# stretches to 16 MiB. SQLite has no fixed-width text types so Text()
-# stays unchanged there.
-_BIG_TEXT = Text().with_variant(MEDIUMTEXT(), "mysql")
 def _normalize_null(v: Any) -> Any:
     if isinstance(v, str) and v.lower() in ("null", "undefined", ""):
         return None
@@ -38,16 +30,9 @@ class Log(SQLModel, table=True):
     service: str = Field(index=True)
     event_type: str = Field(index=True)
     attacker_ip: str = Field(index=True)
-    # Long-text columns — use TEXT so MySQL DDL doesn't truncate to VARCHAR(255).
-    # TEXT is equivalent to plain text in SQLite.
-    raw_line: str = Field(sa_column=Column("raw_line", Text, nullable=False))
-    fields: str = Field(sa_column=Column("fields", Text, nullable=False))
-    msg: Optional[str] = Field(default=None, sa_column=Column("msg", Text, nullable=True))
-    # OTEL trace context — bridges the collector→ingester trace to the SSE
-    # read path. Nullable so pre-existing rows and non-traced deployments
-    # are unaffected.
-    trace_id: Optional[str] = Field(default=None)
-    span_id: Optional[str] = Field(default=None)
+    raw_line: str
+    fields: str
+    msg: Optional[str] = None
 class Bounty(SQLModel, table=True):
     __tablename__ = "bounty"
@@ -57,86 +42,13 @@
     service: str = Field(index=True)
     attacker_ip: str = Field(index=True)
     bounty_type: str = Field(index=True)
-    payload: str = Field(sa_column=Column("payload", Text, nullable=False))
+    payload: str
 class State(SQLModel, table=True):
     __tablename__ = "state"
     key: str = Field(primary_key=True)
-    # JSON-serialized DecnetConfig or other state blobs — can be large as
-    # deckies/services accumulate. MEDIUMTEXT on MySQL (16 MiB ceiling).
-    value: str = Field(sa_column=Column("value", _BIG_TEXT, nullable=False))
+    value: str  # Stores JSON serialized DecnetConfig or other state blobs
-class Attacker(SQLModel, table=True):
-    __tablename__ = "attackers"
-    uuid: str = Field(primary_key=True)
-    ip: str = Field(index=True)
-    first_seen: datetime = Field(index=True)
-    last_seen: datetime = Field(index=True)
-    event_count: int = Field(default=0)
-    service_count: int = Field(default=0)
-    decky_count: int = Field(default=0)
-    # JSON blobs — these grow over the attacker's lifetime. Use MEDIUMTEXT on
-    # MySQL (16 MiB) for the fields that accumulate (fingerprints, commands,
-    # and the deckies/services lists that are unbounded in principle).
-    services: str = Field(
-        default="[]", sa_column=Column("services", _BIG_TEXT, nullable=False, default="[]")
-    )  # JSON list[str]
-    deckies: str = Field(
-        default="[]", sa_column=Column("deckies", _BIG_TEXT, nullable=False, default="[]")
-    )  # JSON list[str], first-contact ordered
-    traversal_path: Optional[str] = Field(
-        default=None, sa_column=Column("traversal_path", Text, nullable=True)
-    )  # "decky-01 → decky-03 → decky-05"
-    is_traversal: bool = Field(default=False)
-    bounty_count: int = Field(default=0)
-    credential_count: int = Field(default=0)
-    fingerprints: str = Field(
-        default="[]", sa_column=Column("fingerprints", _BIG_TEXT, nullable=False, default="[]")
-    )  # JSON list[dict] — bounty fingerprints
-    commands: str = Field(
-        default="[]", sa_column=Column("commands", _BIG_TEXT, nullable=False, default="[]")
-    )  # JSON list[dict] — commands per service/decky
-    updated_at: datetime = Field(
-        default_factory=lambda: datetime.now(timezone.utc), index=True
-    )
-class AttackerBehavior(SQLModel, table=True):
-    """
-    Timing & behavioral profile for an attacker, joined to Attacker by uuid.
-    Kept in a separate table so the core Attacker row stays narrow and
-    behavior data can be updated independently (e.g. as the sniffer observes
-    more packets) without touching the event-count aggregates.
-    """
-    __tablename__ = "attacker_behavior"
-    attacker_uuid: str = Field(primary_key=True, foreign_key="attackers.uuid")
-    # OS / TCP stack fingerprint (rolled up from sniffer events)
-    os_guess: Optional[str] = None
-    hop_distance: Optional[int] = None
-    tcp_fingerprint: str = Field(
-        default="{}",
-        sa_column=Column("tcp_fingerprint", Text, nullable=False, default="{}"),
-    )  # JSON: window, wscale, mss, options_sig
-    retransmit_count: int = Field(default=0)
-    # Behavioral (derived by the profiler from log-event timing)
-    behavior_class: Optional[str] = None  # beaconing | interactive | scanning | brute_force | slow_scan | mixed | unknown
-    beacon_interval_s: Optional[float] = None
-    beacon_jitter_pct: Optional[float] = None
-    tool_guesses: Optional[str] = None  # JSON list[str] — all matched tools
-    timing_stats: str = Field(
-        default="{}",
-        sa_column=Column("timing_stats", Text, nullable=False, default="{}"),
-    )  # JSON: mean/median/stdev/min/max IAT
-    phase_sequence: str = Field(
-        default="{}",
-        sa_column=Column("phase_sequence", Text, nullable=False, default="{}"),
-    )  # JSON: recon_end/exfil_start/latency
-    updated_at: datetime = Field(
-        default_factory=lambda: datetime.now(timezone.utc), index=True
-    )
 # --- API Request/Response Models (Pydantic) ---
@@ -165,12 +77,6 @@ class BountyResponse(BaseModel):
     offset: int
     data: List[dict[str, Any]]
-class AttackersResponse(BaseModel):
-    total: int
-    limit: int
-    offset: int
-    data: List[dict[str, Any]]
 class StatsResponse(BaseModel):
     total_logs: int
     unique_attackers: int
@@ -187,47 +93,3 @@ class DeployIniRequest(BaseModel):
     # This field now enforces strict INI structure during Pydantic initialization.
     # The OpenAPI schema correctly shows it as a required string.
     ini_content: IniContent = PydanticField(..., description="A valid INI formatted string")
-# --- Configuration Models ---
-class CreateUserRequest(BaseModel):
-    username: str = PydanticField(..., min_length=1, max_length=64)
-    password: str = PydanticField(..., min_length=8, max_length=72)
-    role: Literal["admin", "viewer"] = "viewer"
-class UpdateUserRoleRequest(BaseModel):
-    role: Literal["admin", "viewer"]
-class ResetUserPasswordRequest(BaseModel):
-    new_password: str = PydanticField(..., min_length=8, max_length=72)
-class DeploymentLimitRequest(BaseModel):
-    deployment_limit: int = PydanticField(..., ge=1, le=500)
-class GlobalMutationIntervalRequest(BaseModel):
-    global_mutation_interval: str = PydanticField(..., pattern=r"^[1-9]\d*[mdMyY]$")
-class UserResponse(BaseModel):
-    uuid: str
-    username: str
-    role: str
-    must_change_password: bool
-class ConfigResponse(BaseModel):
-    role: str
-    deployment_limit: int
-    global_mutation_interval: str
-class AdminConfigResponse(ConfigResponse):
-    users: List[UserResponse]
-class ComponentHealth(BaseModel):
-    status: Literal["ok", "failing"]
-    detail: Optional[str] = None
-class HealthResponse(BaseModel):
-    status: Literal["healthy", "degraded", "unhealthy"]
-    components: dict[str, ComponentHealth]

View File

@@ -1,98 +0,0 @@
"""
MySQL async engine factory.
Builds a SQLAlchemy AsyncEngine against MySQL using the ``aiomysql`` driver.
Connection info is resolved (in order of precedence):
1. An explicit ``url`` argument passed to :func:`get_async_engine`
2. ``DECNET_DB_URL`` — full SQLAlchemy URL
3. Component env vars:
``DECNET_DB_HOST`` (default ``localhost``)
``DECNET_DB_PORT`` (default ``3306``)
``DECNET_DB_NAME`` (default ``decnet``)
``DECNET_DB_USER`` (default ``decnet``)
``DECNET_DB_PASSWORD`` (default empty — raises unless pytest is running)
"""
from __future__ import annotations
import os
from typing import Optional
from urllib.parse import quote_plus
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
DEFAULT_POOL_SIZE = 10
DEFAULT_MAX_OVERFLOW = 20
DEFAULT_POOL_RECYCLE = 3600 # seconds — avoid MySQL ``wait_timeout`` disconnects
DEFAULT_POOL_PRE_PING = True
def build_mysql_url(
host: Optional[str] = None,
port: Optional[int] = None,
database: Optional[str] = None,
user: Optional[str] = None,
password: Optional[str] = None,
) -> str:
"""Compose an async SQLAlchemy URL for MySQL using the aiomysql driver.
Component args override env vars. Password is percent-encoded so special
characters (``@``, ``:``, ``/``…) don't break URL parsing.
"""
host = host or os.environ.get("DECNET_DB_HOST", "localhost")
port = port or int(os.environ.get("DECNET_DB_PORT", "3306"))
database = database or os.environ.get("DECNET_DB_NAME", "decnet")
user = user or os.environ.get("DECNET_DB_USER", "decnet")
if password is None:
password = os.environ.get("DECNET_DB_PASSWORD", "")
# Allow empty passwords during tests (pytest sets PYTEST_* env vars).
# Outside tests, an empty MySQL password is almost never intentional.
if not password and not any(k.startswith("PYTEST") for k in os.environ):
raise ValueError(
"DECNET_DB_PASSWORD is not set. Either export it, set DECNET_DB_URL, "
"or run under pytest for an empty-password default."
)
pw_enc = quote_plus(password)
user_enc = quote_plus(user)
return f"mysql+aiomysql://{user_enc}:{pw_enc}@{host}:{port}/{database}"
def resolve_url(url: Optional[str] = None) -> str:
"""Pick a connection URL: explicit arg → DECNET_DB_URL env → built from components."""
if url:
return url
env_url = os.environ.get("DECNET_DB_URL")
if env_url:
return env_url
return build_mysql_url()
def get_async_engine(
url: Optional[str] = None,
*,
pool_size: int = DEFAULT_POOL_SIZE,
max_overflow: int = DEFAULT_MAX_OVERFLOW,
pool_recycle: int = DEFAULT_POOL_RECYCLE,
pool_pre_ping: bool = DEFAULT_POOL_PRE_PING,
echo: bool = False,
) -> AsyncEngine:
"""Create an AsyncEngine for MySQL.
Defaults tuned for a dashboard workload: a modest pool, hourly recycle
to sidestep MySQL's idle-connection reaper, and pre-ping to fail fast
if a pooled connection has been killed server-side.
"""
dsn = resolve_url(url)
return create_async_engine(
dsn,
echo=echo,
pool_size=pool_size,
max_overflow=max_overflow,
pool_recycle=pool_recycle,
pool_pre_ping=pool_pre_ping,
)
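The percent-encoding step in `build_mysql_url` matters because `@`, `:` and `/` are URL delimiters. A quick standalone demonstration with `urllib.parse.quote_plus` (the credentials here are made up for illustration):

```python
from urllib.parse import quote_plus

# Hypothetical credentials: "@", ":" and "/" in a raw password would be
# misread as URL delimiters, so they are percent-encoded before assembly.
user = "decnet"
password = "p@ss:w/rd"
url = f"mysql+aiomysql://{quote_plus(user)}:{quote_plus(password)}@db.example:3306/decnet"
print(url)  # mysql+aiomysql://decnet:p%40ss%3Aw%2Frd@db.example:3306/decnet
```

Without the encoding, a URL parser would treat everything after the first `@` in the password as the host part.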

View File

@@ -1,130 +0,0 @@
"""
MySQL implementation of :class:`BaseRepository`.
Inherits the portable SQLModel query code from :class:`SQLModelRepository`
and only overrides the two places where MySQL's SQL dialect differs from
SQLite's:
* :meth:`_migrate_attackers_table` — uses ``information_schema`` (MySQL
has no ``PRAGMA``).
* :meth:`get_log_histogram` — uses ``FROM_UNIXTIME`` /
``UNIX_TIMESTAMP`` + integer division for bucketing.
"""
from __future__ import annotations
from typing import List, Optional
from sqlalchemy import func, select, text, literal_column
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from sqlmodel.sql.expression import SelectOfScalar
from decnet.web.db.models import Log
from decnet.web.db.mysql.database import get_async_engine
from decnet.web.db.sqlmodel_repo import SQLModelRepository
class MySQLRepository(SQLModelRepository):
"""MySQL backend — uses ``aiomysql``."""
def __init__(self, url: Optional[str] = None, **engine_kwargs) -> None:
self.engine = get_async_engine(url=url, **engine_kwargs)
self.session_factory = async_sessionmaker(
self.engine, class_=AsyncSession, expire_on_commit=False
)
async def _migrate_attackers_table(self) -> None:
"""Drop the legacy (pre-UUID) ``attackers`` table if it exists without a ``uuid`` column.
MySQL exposes column metadata via ``information_schema.COLUMNS``.
``DATABASE()`` scopes the lookup to the currently connected schema.
"""
async with self.engine.begin() as conn:
rows = (await conn.execute(text(
"SELECT COLUMN_NAME FROM information_schema.COLUMNS "
"WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'attackers'"
))).fetchall()
if rows and not any(r[0] == "uuid" for r in rows):
await conn.execute(text("DROP TABLE attackers"))
async def _migrate_column_types(self) -> None:
"""Upgrade TEXT → MEDIUMTEXT for columns that accumulate large JSON blobs.
``create_all()`` never alters existing columns, so tables created before
``_BIG_TEXT`` was introduced keep their 64 KiB ``TEXT`` cap. This method
inspects ``information_schema`` and issues ``ALTER TABLE … MODIFY COLUMN``
for each offending column found.
"""
targets: dict[str, dict[str, str]] = {
"attackers": {
"commands": "MEDIUMTEXT NOT NULL DEFAULT '[]'",
"fingerprints": "MEDIUMTEXT NOT NULL DEFAULT '[]'",
"services": "MEDIUMTEXT NOT NULL DEFAULT '[]'",
"deckies": "MEDIUMTEXT NOT NULL DEFAULT '[]'",
},
"state": {
"value": "MEDIUMTEXT NOT NULL",
},
}
async with self.engine.begin() as conn:
rows = (await conn.execute(text(
"SELECT TABLE_NAME, COLUMN_NAME FROM information_schema.COLUMNS "
"WHERE TABLE_SCHEMA = DATABASE() "
" AND TABLE_NAME IN ('attackers', 'state') "
" AND COLUMN_NAME IN ('commands','fingerprints','services','deckies','value') "
" AND DATA_TYPE = 'text'"
))).fetchall()
for table_name, col_name in rows:
spec = targets.get(table_name, {}).get(col_name)
if spec:
await conn.execute(text(
f"ALTER TABLE `{table_name}` MODIFY COLUMN `{col_name}` {spec}"
))
async def initialize(self) -> None:
"""Create tables and run all MySQL-specific migrations."""
from sqlmodel import SQLModel
await self._migrate_attackers_table()
await self._migrate_column_types()
async with self.engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
await self._ensure_admin_user()
def _json_field_equals(self, key: str):
# MySQL 5.7+ exposes JSON_EXTRACT; quoted string result returned for
# TEXT-stored JSON, same behavior we rely on in SQLite.
return text(f"JSON_UNQUOTE(JSON_EXTRACT(fields, '$.{key}')) = :val")
async def get_log_histogram(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
interval_minutes: int = 15,
) -> List[dict]:
bucket_seconds = max(interval_minutes, 1) * 60
# Truncate each timestamp to the start of its bucket:
# FROM_UNIXTIME( (UNIX_TIMESTAMP(timestamp) DIV N) * N )
# DIV is MySQL's integer division operator.
bucket_expr = literal_column(
f"FROM_UNIXTIME((UNIX_TIMESTAMP(timestamp) DIV {bucket_seconds}) * {bucket_seconds})"
).label("bucket_time")
statement: SelectOfScalar = select(bucket_expr, func.count().label("count")).select_from(Log)
statement = self._apply_filters(statement, search, start_time, end_time)
statement = statement.group_by(literal_column("bucket_time")).order_by(
literal_column("bucket_time")
)
async with self.session_factory() as session:
results = await session.execute(statement)
# Normalize to ISO string for API parity with the SQLite backend
# (SQLite's datetime() returns a string already; FROM_UNIXTIME
# returns a datetime).
out: List[dict] = []
for r in results.all():
ts = r[0]
out.append({
"time": ts.isoformat(sep=" ") if hasattr(ts, "isoformat") else ts,
"count": r[1],
})
return out
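The `FROM_UNIXTIME((UNIX_TIMESTAMP(ts) DIV N) * N)` expression above is plain integer-division truncation: divide the epoch by the bucket width, drop the remainder, multiply back. A minimal Python sketch of the same arithmetic (the timestamp is an arbitrary example value):

```python
from datetime import datetime, timezone

def bucket_start(epoch_seconds: int, interval_minutes: int = 15) -> datetime:
    # Same truncation MySQL performs with (UNIX_TIMESTAMP(ts) DIV N) * N:
    # integer-divide by the bucket width in seconds, then multiply back.
    n = max(interval_minutes, 1) * 60
    return datetime.fromtimestamp((epoch_seconds // n) * n, tz=timezone.utc)

# 2021-01-01 00:07:00 UTC falls into the 00:00 bucket for 15-minute intervals.
print(bucket_start(1609459620))  # 2021-01-01 00:00:00+00:00
```

Grouping rows by this truncated value is what turns raw timestamps into fixed-width histogram buckets.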

View File

@@ -60,26 +60,6 @@ class BaseRepository(ABC):
        """Update a user's password and change the must_change_password flag."""
        pass
-    @abstractmethod
-    async def list_users(self) -> list[dict[str, Any]]:
-        """Retrieve all users (caller must strip password_hash before returning to clients)."""
-        pass
-    @abstractmethod
-    async def delete_user(self, uuid: str) -> bool:
-        """Delete a user by UUID. Returns True if user was found and deleted."""
-        pass
-    @abstractmethod
-    async def update_user_role(self, uuid: str, role: str) -> None:
-        """Update a user's role."""
-        pass
-    @abstractmethod
-    async def purge_logs_and_bounties(self) -> dict[str, int]:
-        """Delete all logs, bounties, and attacker profiles. Returns counts of deleted rows."""
-        pass
    @abstractmethod
    async def add_bounty(self, bounty_data: dict[str, Any]) -> None:
        """Add a new harvested artifact (bounty) to the database."""
@@ -110,76 +90,3 @@ class BaseRepository(ABC):
    async def set_state(self, key: str, value: Any) -> None:
        """Store a specific state entry by key."""
        pass
-    @abstractmethod
-    async def get_max_log_id(self) -> int:
-        """Return the highest log ID, or 0 if the table is empty."""
-        pass
-    @abstractmethod
-    async def get_logs_after_id(self, last_id: int, limit: int = 500) -> list[dict[str, Any]]:
-        """Return logs with id > last_id, ordered by id ASC, up to limit."""
-        pass
-    @abstractmethod
-    async def get_all_bounties_by_ip(self) -> dict[str, list[dict[str, Any]]]:
-        """Retrieve all bounty rows grouped by attacker_ip."""
-        pass
-    @abstractmethod
-    async def get_bounties_for_ips(self, ips: set[str]) -> dict[str, list[dict[str, Any]]]:
-        """Retrieve bounty rows grouped by attacker_ip, filtered to only the given IPs."""
-        pass
-    @abstractmethod
-    async def upsert_attacker(self, data: dict[str, Any]) -> str:
-        """Insert or replace an attacker profile record. Returns the row's UUID."""
-        pass
-    @abstractmethod
-    async def upsert_attacker_behavior(self, attacker_uuid: str, data: dict[str, Any]) -> None:
-        """Insert or replace the behavioral/fingerprint row for an attacker."""
-        pass
-    @abstractmethod
-    async def get_attacker_behavior(self, attacker_uuid: str) -> Optional[dict[str, Any]]:
-        """Retrieve the behavioral/fingerprint row for an attacker UUID."""
-        pass
-    @abstractmethod
-    async def get_behaviors_for_ips(self, ips: set[str]) -> dict[str, dict[str, Any]]:
-        """Bulk-fetch behavior rows keyed by attacker IP (JOIN to attackers)."""
-        pass
-    @abstractmethod
-    async def get_attacker_by_uuid(self, uuid: str) -> Optional[dict[str, Any]]:
-        """Retrieve a single attacker profile by UUID."""
-        pass
-    @abstractmethod
-    async def get_attackers(
-        self,
-        limit: int = 50,
-        offset: int = 0,
-        search: Optional[str] = None,
-        sort_by: str = "recent",
-        service: Optional[str] = None,
-    ) -> list[dict[str, Any]]:
-        """Retrieve paginated attacker profile records."""
-        pass
-    @abstractmethod
-    async def get_total_attackers(self, search: Optional[str] = None, service: Optional[str] = None) -> int:
-        """Retrieve the total count of attacker profile records, optionally filtered."""
-        pass
-    @abstractmethod
-    async def get_attacker_commands(
-        self,
-        uuid: str,
-        limit: int = 50,
-        offset: int = 0,
-        service: Optional[str] = None,
-    ) -> dict[str, Any]:
-        """Retrieve paginated commands for an attacker, optionally filtered by service."""
-        pass
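The `get_max_log_id` / `get_logs_after_id` pair defines an incremental-polling contract: remember the highest id seen, then page forward in id order. A toy in-memory stand-in (hypothetical, purely illustrative, not the real repository) shows how a consumer uses it:

```python
from typing import Any

class InMemoryLogStore:
    """Toy stand-in for the two polling methods on BaseRepository."""
    def __init__(self) -> None:
        self.rows: list[dict[str, Any]] = []

    def add(self, **fields: Any) -> None:
        self.rows.append({"id": len(self.rows) + 1, **fields})

    def get_max_log_id(self) -> int:
        return self.rows[-1]["id"] if self.rows else 0

    def get_logs_after_id(self, last_id: int, limit: int = 500) -> list[dict[str, Any]]:
        return [r for r in self.rows if r["id"] > last_id][:limit]

store = InMemoryLogStore()
for svc in ("ssh", "ftp", "http"):
    store.add(service=svc)
cursor = 0
batch = store.get_logs_after_id(cursor, limit=2)
cursor = batch[-1]["id"]                 # advance past what we've seen
print([r["service"] for r in batch])     # ['ssh', 'ftp']
print(store.get_logs_after_id(cursor))   # [{'id': 3, 'service': 'http'}]
```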


@@ -1,5 +1,5 @@
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine
-from sqlalchemy import create_engine, Engine, event
+from sqlalchemy import create_engine, Engine
from sqlmodel import SQLModel
from typing import AsyncGenerator
@@ -11,21 +11,7 @@ def get_async_engine(db_path: str) -> AsyncEngine:
    prefix = "sqlite+aiosqlite:///"
    if db_path.startswith(":memory:"):
        prefix = "sqlite+aiosqlite://"
-    engine = create_async_engine(
-        f"{prefix}{db_path}",
-        echo=False,
-        connect_args={"uri": True, "timeout": 30},
-    )
-    @event.listens_for(engine.sync_engine, "connect")
-    def _set_sqlite_pragmas(dbapi_conn, _conn_record):
-        cursor = dbapi_conn.cursor()
-        cursor.execute("PRAGMA journal_mode=WAL")
-        cursor.execute("PRAGMA synchronous=NORMAL")
-        cursor.execute("PRAGMA busy_timeout=30000")
-        cursor.close()
-    return engine
+    return create_async_engine(f"{prefix}{db_path}", echo=False, connect_args={"uri": True})
def get_sync_engine(db_path: str) -> Engine:
    prefix = "sqlite:///"
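The removed listener applied WAL mode and a busy timeout on each new connection. The same pragmas can be tried directly with the standard-library `sqlite3` module (a standalone sketch, independent of the project's engine setup):

```python
import os
import sqlite3
import tempfile

# WAL needs a file-backed database (":memory:" stays in "memory" mode).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("PRAGMA busy_timeout=30000")  # wait up to 30 s on a locked db
print(mode)  # wal
conn.close()
```

WAL lets readers proceed while a writer commits, which is why it is commonly paired with a generous `busy_timeout` under concurrent async access.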


@@ -1,22 +1,23 @@
-from typing import List, Optional
+import asyncio
+import json
+import uuid
+from datetime import datetime
+from typing import Any, Optional, List
-from sqlalchemy import func, select, text, literal_column
+from sqlalchemy import func, select, desc, asc, text, or_, update, literal_column
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from sqlmodel.sql.expression import SelectOfScalar
-from decnet.config import _ROOT
+from decnet.config import load_state, _ROOT
-from decnet.web.db.models import Log
+from decnet.env import DECNET_ADMIN_USER, DECNET_ADMIN_PASSWORD
+from decnet.web.auth import get_password_hash
+from decnet.web.db.repository import BaseRepository
+from decnet.web.db.models import User, Log, Bounty, State
from decnet.web.db.sqlite.database import get_async_engine
-from decnet.web.db.sqlmodel_repo import SQLModelRepository
-class SQLiteRepository(SQLModelRepository):
+class SQLiteRepository(BaseRepository):
-    """SQLite backend — uses ``aiosqlite``.
-    Overrides the two places where SQLite's SQL dialect differs from
-    MySQL/PostgreSQL: legacy-schema migration (via ``PRAGMA table_info``)
-    and the log-histogram bucket expression (via ``strftime`` + ``unixepoch``).
-    """
+    """SQLite implementation using SQLModel and SQLAlchemy Async."""
    def __init__(self, db_path: str = str(_ROOT / "decnet.db")) -> None:
        self.db_path = db_path
@@ -25,16 +26,173 @@ class SQLiteRepository(SQLModelRepository):
            self.engine, class_=AsyncSession, expire_on_commit=False
        )
-    async def _migrate_attackers_table(self) -> None:
-        """Drop the old attackers table if it lacks the uuid column (pre-UUID schema)."""
+    async def initialize(self) -> None:
+        """Async warm-up / verification. Creates tables if they don't exist."""
+        from sqlmodel import SQLModel
        async with self.engine.begin() as conn:
-            rows = (await conn.execute(text("PRAGMA table_info(attackers)"))).fetchall()
-            if rows and not any(r[1] == "uuid" for r in rows):
-                await conn.execute(text("DROP TABLE attackers"))
-    def _json_field_equals(self, key: str):
-        # SQLite stores JSON as text; json_extract is the canonical accessor.
-        return text(f"json_extract(fields, '$.{key}') = :val")
+            await conn.run_sync(SQLModel.metadata.create_all)
+        async with self.session_factory() as session:
+            # Check if admin exists
+            result = await session.execute(
select(User).where(User.username == DECNET_ADMIN_USER)
)
if not result.scalar_one_or_none():
session.add(User(
uuid=str(uuid.uuid4()),
username=DECNET_ADMIN_USER,
password_hash=get_password_hash(DECNET_ADMIN_PASSWORD),
role="admin",
must_change_password=True,
))
await session.commit()
async def reinitialize(self) -> None:
"""Initialize the database schema asynchronously (useful for tests)."""
from sqlmodel import SQLModel
async with self.engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == DECNET_ADMIN_USER)
)
if not result.scalar_one_or_none():
session.add(User(
uuid=str(uuid.uuid4()),
username=DECNET_ADMIN_USER,
password_hash=get_password_hash(DECNET_ADMIN_PASSWORD),
role="admin",
must_change_password=True,
))
await session.commit()
# ------------------------------------------------------------------ logs
async def add_log(self, log_data: dict[str, Any]) -> None:
data = log_data.copy()
if "fields" in data and isinstance(data["fields"], dict):
data["fields"] = json.dumps(data["fields"])
if "timestamp" in data and isinstance(data["timestamp"], str):
try:
data["timestamp"] = datetime.fromisoformat(
data["timestamp"].replace("Z", "+00:00")
)
except ValueError:
pass
async with self.session_factory() as session:
session.add(Log(**data))
await session.commit()
def _apply_filters(
self,
statement: SelectOfScalar,
search: Optional[str],
start_time: Optional[str],
end_time: Optional[str],
) -> SelectOfScalar:
import re
import shlex
if start_time:
statement = statement.where(Log.timestamp >= start_time)
if end_time:
statement = statement.where(Log.timestamp <= end_time)
if search:
try:
tokens = shlex.split(search)
except ValueError:
tokens = search.split()
core_fields = {
"decky": Log.decky,
"service": Log.service,
"event": Log.event_type,
"attacker": Log.attacker_ip,
"attacker-ip": Log.attacker_ip,
"attacker_ip": Log.attacker_ip,
}
for token in tokens:
if ":" in token:
key, val = token.split(":", 1)
if key in core_fields:
statement = statement.where(core_fields[key] == val)
else:
key_safe = re.sub(r"[^a-zA-Z0-9_]", "", key)
if key_safe:
statement = statement.where(
text(f"json_extract(fields, '$.{key_safe}') = :val")
).params(val=val)
else:
lk = f"%{token}%"
statement = statement.where(
or_(
Log.raw_line.like(lk),
Log.decky.like(lk),
Log.service.like(lk),
Log.attacker_ip.like(lk),
)
)
return statement
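The token handling in `_apply_filters` can be isolated as a small sketch: `shlex` splitting so quoted phrases survive, a fixed set of core fields, and regex-sanitized keys for JSON fields. The dict-based return shape here is invented for illustration; the real method builds SQLAlchemy `where` clauses instead:

```python
import re
import shlex

CORE_FIELDS = {"decky", "service", "event", "attacker", "attacker-ip", "attacker_ip"}

def parse_search(search: str) -> tuple[dict[str, str], dict[str, str], list[str]]:
    """Split a search string into core-field filters, JSON-field filters,
    and free-text terms, mirroring the tokenizer in _apply_filters."""
    try:
        tokens = shlex.split(search)
    except ValueError:          # unbalanced quotes: fall back to whitespace split
        tokens = search.split()
    core: dict[str, str] = {}
    json_fields: dict[str, str] = {}
    free_text: list[str] = []
    for token in tokens:
        if ":" in token:
            key, val = token.split(":", 1)
            if key in CORE_FIELDS:
                core[key] = val
            else:
                # strip anything that could smuggle SQL into the JSON path
                key_safe = re.sub(r"[^a-zA-Z0-9_]", "", key)
                if key_safe:
                    json_fields[key_safe] = val
        else:
            free_text.append(token)
    return core, json_fields, free_text

print(parse_search('service:ssh username:root "curl -sL"'))
# ({'service': 'ssh'}, {'username': 'root'}, ['curl -sL'])
```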
async def get_logs(
self,
limit: int = 50,
offset: int = 0,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log)
.order_by(desc(Log.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode='json') for log in results.scalars().all()]
async def get_max_log_id(self) -> int:
async with self.session_factory() as session:
result = await session.execute(select(func.max(Log.id)))
val = result.scalar()
return val if val is not None else 0
async def get_logs_after_id(
self,
last_id: int,
limit: int = 50,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log).where(Log.id > last_id).order_by(asc(Log.id)).limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode='json') for log in results.scalars().all()]
async def get_total_logs(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> int:
statement = select(func.count()).select_from(Log)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0
    async def get_log_histogram(
        self,
@@ -48,7 +206,7 @@ class SQLiteRepository(SQLModelRepository):
            f"datetime((strftime('%s', timestamp) / {bucket_seconds}) * {bucket_seconds}, 'unixepoch')"
        ).label("bucket_time")
-        statement: SelectOfScalar = select(bucket_expr, func.count().label("count")).select_from(Log)
+        statement = select(bucket_expr, func.count().label("count")).select_from(Log)
        statement = self._apply_filters(statement, search, start_time, end_time)
        statement = statement.group_by(literal_column("bucket_time")).order_by(
            literal_column("bucket_time")
@@ -57,3 +215,164 @@ class SQLiteRepository(SQLModelRepository):
        async with self.session_factory() as session:
            results = await session.execute(statement)
            return [{"time": r[0], "count": r[1]} for r in results.all()]
async def get_stats_summary(self) -> dict[str, Any]:
async with self.session_factory() as session:
total_logs = (
await session.execute(select(func.count()).select_from(Log))
).scalar() or 0
unique_attackers = (
await session.execute(
select(func.count(func.distinct(Log.attacker_ip)))
)
).scalar() or 0
active_deckies = (
await session.execute(
select(func.count(func.distinct(Log.decky)))
)
).scalar() or 0
_state = await asyncio.to_thread(load_state)
deployed_deckies = len(_state[0].deckies) if _state else 0
return {
"total_logs": total_logs,
"unique_attackers": unique_attackers,
"active_deckies": active_deckies,
"deployed_deckies": deployed_deckies,
}
async def get_deckies(self) -> List[dict]:
_state = await asyncio.to_thread(load_state)
return [_d.model_dump() for _d in _state[0].deckies] if _state else []
# ------------------------------------------------------------------ users
async def get_user_by_username(self, username: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == username)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def get_user_by_uuid(self, uuid: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.uuid == uuid)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def create_user(self, user_data: dict[str, Any]) -> None:
async with self.session_factory() as session:
session.add(User(**user_data))
await session.commit()
async def update_user_password(
self, uuid: str, password_hash: str, must_change_password: bool = False
) -> None:
async with self.session_factory() as session:
await session.execute(
update(User)
.where(User.uuid == uuid)
.values(
password_hash=password_hash,
must_change_password=must_change_password,
)
)
await session.commit()
# ---------------------------------------------------------------- bounties
async def add_bounty(self, bounty_data: dict[str, Any]) -> None:
data = bounty_data.copy()
if "payload" in data and isinstance(data["payload"], dict):
data["payload"] = json.dumps(data["payload"])
async with self.session_factory() as session:
session.add(Bounty(**data))
await session.commit()
def _apply_bounty_filters(
self,
statement: SelectOfScalar,
bounty_type: Optional[str],
search: Optional[str]
) -> SelectOfScalar:
if bounty_type:
statement = statement.where(Bounty.bounty_type == bounty_type)
if search:
lk = f"%{search}%"
statement = statement.where(
or_(
Bounty.decky.like(lk),
Bounty.service.like(lk),
Bounty.attacker_ip.like(lk),
Bounty.payload.like(lk),
)
)
return statement
async def get_bounties(
self,
limit: int = 50,
offset: int = 0,
bounty_type: Optional[str] = None,
search: Optional[str] = None,
) -> List[dict]:
statement = (
select(Bounty)
.order_by(desc(Bounty.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
results = await session.execute(statement)
final = []
for item in results.scalars().all():
d = item.model_dump(mode='json')
try:
d["payload"] = json.loads(d["payload"])
except (json.JSONDecodeError, TypeError):
pass
final.append(d)
return final
async def get_total_bounties(
self, bounty_type: Optional[str] = None, search: Optional[str] = None
) -> int:
statement = select(func.count()).select_from(Bounty)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0
async def get_state(self, key: str) -> Optional[dict[str, Any]]:
async with self.session_factory() as session:
statement = select(State).where(State.key == key)
result = await session.execute(statement)
state = result.scalar_one_or_none()
if state:
return json.loads(state.value)
return None
async def set_state(self, key: str, value: Any) -> None: # noqa: ANN401
async with self.session_factory() as session:
# Check if exists
statement = select(State).where(State.key == key)
result = await session.execute(statement)
state = result.scalar_one_or_none()
value_json = json.dumps(value)
if state:
state.value = value_json
session.add(state)
else:
new_state = State(key=key, value=value_json)
session.add(new_state)
await session.commit()


@@ -1,628 +0,0 @@
"""
Shared SQLModel-based repository implementation.
Contains all dialect-portable query code used by the SQLite and MySQL
backends. Dialect-specific behavior lives in subclasses:
* engine/session construction (``__init__``)
* ``_migrate_attackers_table`` (legacy schema check; DDL introspection
is not portable)
* ``get_log_histogram`` (date-bucket expression differs per dialect)
"""
from __future__ import annotations
import asyncio
import json
import uuid
from datetime import datetime, timezone
from typing import Any, Optional, List
from sqlalchemy import func, select, desc, asc, text, or_, update
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker
from sqlmodel.sql.expression import SelectOfScalar
from decnet.config import load_state
from decnet.env import DECNET_ADMIN_USER, DECNET_ADMIN_PASSWORD
from decnet.web.auth import get_password_hash
from decnet.web.db.repository import BaseRepository
from decnet.web.db.models import User, Log, Bounty, State, Attacker, AttackerBehavior
class SQLModelRepository(BaseRepository):
"""Concrete SQLModel/SQLAlchemy-async repository.
Subclasses provide ``self.engine`` (AsyncEngine) and ``self.session_factory``
in ``__init__``, and override the few dialect-specific helpers.
"""
engine: AsyncEngine
session_factory: async_sessionmaker[AsyncSession]
# ------------------------------------------------------------ lifecycle
async def initialize(self) -> None:
"""Create tables if absent and seed the admin user."""
from sqlmodel import SQLModel
await self._migrate_attackers_table()
async with self.engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
await self._ensure_admin_user()
async def reinitialize(self) -> None:
"""Re-create schema (for tests / reset flows). Does NOT drop existing tables."""
from sqlmodel import SQLModel
async with self.engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
await self._ensure_admin_user()
async def _ensure_admin_user(self) -> None:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == DECNET_ADMIN_USER)
)
if not result.scalar_one_or_none():
session.add(User(
uuid=str(uuid.uuid4()),
username=DECNET_ADMIN_USER,
password_hash=get_password_hash(DECNET_ADMIN_PASSWORD),
role="admin",
must_change_password=True,
))
await session.commit()
async def _migrate_attackers_table(self) -> None:
"""Legacy-schema cleanup. Override per dialect (DDL introspection is non-portable)."""
return None
# ---------------------------------------------------------------- logs
async def add_log(self, log_data: dict[str, Any]) -> None:
data = log_data.copy()
if "fields" in data and isinstance(data["fields"], dict):
data["fields"] = json.dumps(data["fields"])
if "timestamp" in data and isinstance(data["timestamp"], str):
try:
data["timestamp"] = datetime.fromisoformat(
data["timestamp"].replace("Z", "+00:00")
)
except ValueError:
pass
async with self.session_factory() as session:
session.add(Log(**data))
await session.commit()
def _apply_filters(
self,
statement: SelectOfScalar,
search: Optional[str],
start_time: Optional[str],
end_time: Optional[str],
) -> SelectOfScalar:
import re
import shlex
if start_time:
statement = statement.where(Log.timestamp >= start_time)
if end_time:
statement = statement.where(Log.timestamp <= end_time)
if search:
try:
tokens = shlex.split(search)
except ValueError:
tokens = search.split()
core_fields = {
"decky": Log.decky,
"service": Log.service,
"event": Log.event_type,
"attacker": Log.attacker_ip,
"attacker-ip": Log.attacker_ip,
"attacker_ip": Log.attacker_ip,
}
for token in tokens:
if ":" in token:
key, val = token.split(":", 1)
if key in core_fields:
statement = statement.where(core_fields[key] == val)
else:
key_safe = re.sub(r"[^a-zA-Z0-9_]", "", key)
if key_safe:
statement = statement.where(
self._json_field_equals(key_safe)
).params(val=val)
else:
lk = f"%{token}%"
statement = statement.where(
or_(
Log.raw_line.like(lk),
Log.decky.like(lk),
Log.service.like(lk),
Log.attacker_ip.like(lk),
)
)
return statement
def _json_field_equals(self, key: str):
"""Return a text() predicate that matches rows where fields->key == :val.
Both SQLite and MySQL expose a ``JSON_EXTRACT`` function; MySQL also
exposes the same function under ``json_extract`` (case-insensitive).
The ``:val`` parameter is bound separately and must be supplied with
``.params(val=...)`` by the caller, which keeps us safe from injection.
"""
return text(f"JSON_EXTRACT(fields, '$.{key}') = :val")
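That predicate can be exercised standalone against SQLite's JSON1 extension (assumes a Python build whose bundled SQLite includes JSON1, which is standard in modern builds; the table and values here are made up):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (fields TEXT)")
conn.execute("INSERT INTO logs VALUES (?)", (json.dumps({"username": "root"}),))
conn.execute("INSERT INTO logs VALUES (?)", (json.dumps({"username": "guest"}),))
# The JSON path key is sanitized before interpolation; the compared
# value travels as a bound parameter, never as interpolated text.
rows = conn.execute(
    "SELECT fields FROM logs WHERE json_extract(fields, '$.username') = ?",
    ("root",),
).fetchall()
print(len(rows))  # 1
```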
async def get_logs(
self,
limit: int = 50,
offset: int = 0,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log)
.order_by(desc(Log.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode="json") for log in results.scalars().all()]
async def get_max_log_id(self) -> int:
async with self.session_factory() as session:
result = await session.execute(select(func.max(Log.id)))
val = result.scalar()
return val if val is not None else 0
async def get_logs_after_id(
self,
last_id: int,
limit: int = 50,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log).where(Log.id > last_id).order_by(asc(Log.id)).limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode="json") for log in results.scalars().all()]
async def get_total_logs(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> int:
statement = select(func.count()).select_from(Log)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0
async def get_log_histogram(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
interval_minutes: int = 15,
) -> List[dict]:
"""Dialect-specific — override per backend."""
raise NotImplementedError
async def get_stats_summary(self) -> dict[str, Any]:
async with self.session_factory() as session:
total_logs = (
await session.execute(select(func.count()).select_from(Log))
).scalar() or 0
unique_attackers = (
await session.execute(
select(func.count(func.distinct(Log.attacker_ip)))
)
).scalar() or 0
_state = await asyncio.to_thread(load_state)
deployed_deckies = len(_state[0].deckies) if _state else 0
return {
"total_logs": total_logs,
"unique_attackers": unique_attackers,
"active_deckies": deployed_deckies,
"deployed_deckies": deployed_deckies,
}
async def get_deckies(self) -> List[dict]:
_state = await asyncio.to_thread(load_state)
return [_d.model_dump() for _d in _state[0].deckies] if _state else []
# --------------------------------------------------------------- users
async def get_user_by_username(self, username: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == username)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def get_user_by_uuid(self, uuid: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.uuid == uuid)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def create_user(self, user_data: dict[str, Any]) -> None:
async with self.session_factory() as session:
session.add(User(**user_data))
await session.commit()
async def update_user_password(
self, uuid: str, password_hash: str, must_change_password: bool = False
) -> None:
async with self.session_factory() as session:
await session.execute(
update(User)
.where(User.uuid == uuid)
.values(
password_hash=password_hash,
must_change_password=must_change_password,
)
)
await session.commit()
async def list_users(self) -> list[dict]:
async with self.session_factory() as session:
result = await session.execute(select(User))
return [u.model_dump() for u in result.scalars().all()]
async def delete_user(self, uuid: str) -> bool:
async with self.session_factory() as session:
result = await session.execute(select(User).where(User.uuid == uuid))
user = result.scalar_one_or_none()
if not user:
return False
await session.delete(user)
await session.commit()
return True
async def update_user_role(self, uuid: str, role: str) -> None:
async with self.session_factory() as session:
await session.execute(
update(User).where(User.uuid == uuid).values(role=role)
)
await session.commit()
async def purge_logs_and_bounties(self) -> dict[str, int]:
async with self.session_factory() as session:
logs_deleted = (await session.execute(text("DELETE FROM logs"))).rowcount
bounties_deleted = (await session.execute(text("DELETE FROM bounty"))).rowcount
# attacker_behavior has FK → attackers.uuid; delete children first.
await session.execute(text("DELETE FROM attacker_behavior"))
attackers_deleted = (await session.execute(text("DELETE FROM attackers"))).rowcount
await session.commit()
return {
"logs": logs_deleted,
"bounties": bounties_deleted,
"attackers": attackers_deleted,
}
# ------------------------------------------------------------ bounties
async def add_bounty(self, bounty_data: dict[str, Any]) -> None:
data = bounty_data.copy()
if "payload" in data and isinstance(data["payload"], dict):
data["payload"] = json.dumps(data["payload"])
async with self.session_factory() as session:
dup = await session.execute(
select(Bounty.id).where(
Bounty.bounty_type == data.get("bounty_type"),
Bounty.attacker_ip == data.get("attacker_ip"),
Bounty.payload == data.get("payload"),
).limit(1)
)
if dup.first() is not None:
return
session.add(Bounty(**data))
await session.commit()
def _apply_bounty_filters(
self,
statement: SelectOfScalar,
bounty_type: Optional[str],
search: Optional[str],
) -> SelectOfScalar:
if bounty_type:
statement = statement.where(Bounty.bounty_type == bounty_type)
if search:
lk = f"%{search}%"
statement = statement.where(
or_(
Bounty.decky.like(lk),
Bounty.service.like(lk),
Bounty.attacker_ip.like(lk),
Bounty.payload.like(lk),
)
)
return statement
async def get_bounties(
self,
limit: int = 50,
offset: int = 0,
bounty_type: Optional[str] = None,
search: Optional[str] = None,
) -> List[dict]:
statement = (
select(Bounty)
.order_by(desc(Bounty.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
results = await session.execute(statement)
final = []
for item in results.scalars().all():
d = item.model_dump(mode="json")
try:
d["payload"] = json.loads(d["payload"])
except (json.JSONDecodeError, TypeError):
pass
final.append(d)
return final
async def get_total_bounties(
self, bounty_type: Optional[str] = None, search: Optional[str] = None
) -> int:
statement = select(func.count()).select_from(Bounty)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0
async def get_state(self, key: str) -> Optional[dict[str, Any]]:
async with self.session_factory() as session:
statement = select(State).where(State.key == key)
result = await session.execute(statement)
state = result.scalar_one_or_none()
if state:
return json.loads(state.value)
return None
async def set_state(self, key: str, value: Any) -> None: # noqa: ANN401
async with self.session_factory() as session:
statement = select(State).where(State.key == key)
result = await session.execute(statement)
state = result.scalar_one_or_none()
value_json = json.dumps(value)
if state:
state.value = value_json
session.add(state)
else:
session.add(State(key=key, value=value_json))
await session.commit()
# ----------------------------------------------------------- attackers
async def get_all_bounties_by_ip(self) -> dict[str, List[dict[str, Any]]]:
from collections import defaultdict
async with self.session_factory() as session:
result = await session.execute(
select(Bounty).order_by(asc(Bounty.timestamp))
)
grouped: dict[str, List[dict[str, Any]]] = defaultdict(list)
for item in result.scalars().all():
d = item.model_dump(mode="json")
try:
d["payload"] = json.loads(d["payload"])
except (json.JSONDecodeError, TypeError):
pass
grouped[item.attacker_ip].append(d)
return dict(grouped)
async def get_bounties_for_ips(self, ips: set[str]) -> dict[str, List[dict[str, Any]]]:
from collections import defaultdict
async with self.session_factory() as session:
result = await session.execute(
select(Bounty).where(Bounty.attacker_ip.in_(ips)).order_by(asc(Bounty.timestamp))
)
grouped: dict[str, List[dict[str, Any]]] = defaultdict(list)
for item in result.scalars().all():
d = item.model_dump(mode="json")
try:
d["payload"] = json.loads(d["payload"])
except (json.JSONDecodeError, TypeError):
pass
grouped[item.attacker_ip].append(d)
return dict(grouped)
async def upsert_attacker(self, data: dict[str, Any]) -> str:
async with self.session_factory() as session:
result = await session.execute(
select(Attacker).where(Attacker.ip == data["ip"])
)
existing = result.scalar_one_or_none()
if existing:
for k, v in data.items():
setattr(existing, k, v)
session.add(existing)
row_uuid = existing.uuid
else:
row_uuid = str(uuid.uuid4())
data = {**data, "uuid": row_uuid}
session.add(Attacker(**data))
await session.commit()
return row_uuid
async def upsert_attacker_behavior(
self,
attacker_uuid: str,
data: dict[str, Any],
) -> None:
async with self.session_factory() as session:
            result = await session.execute(
                select(AttackerBehavior).where(
                    AttackerBehavior.attacker_uuid == attacker_uuid
                )
            )
            existing = result.scalar_one_or_none()
            payload = {**data, "updated_at": datetime.now(timezone.utc)}
            if existing:
                for k, v in payload.items():
                    setattr(existing, k, v)
                session.add(existing)
            else:
                session.add(AttackerBehavior(attacker_uuid=attacker_uuid, **payload))
            await session.commit()

    async def get_attacker_behavior(
        self,
        attacker_uuid: str,
    ) -> Optional[dict[str, Any]]:
        async with self.session_factory() as session:
            result = await session.execute(
                select(AttackerBehavior).where(
                    AttackerBehavior.attacker_uuid == attacker_uuid
                )
            )
            row = result.scalar_one_or_none()
            if not row:
                return None
            return self._deserialize_behavior(row.model_dump(mode="json"))

    async def get_behaviors_for_ips(
        self,
        ips: set[str],
    ) -> dict[str, dict[str, Any]]:
        if not ips:
            return {}
        async with self.session_factory() as session:
            result = await session.execute(
                select(Attacker.ip, AttackerBehavior)
                .join(AttackerBehavior, Attacker.uuid == AttackerBehavior.attacker_uuid)
                .where(Attacker.ip.in_(ips))
            )
            out: dict[str, dict[str, Any]] = {}
            for ip, row in result.all():
                out[ip] = self._deserialize_behavior(row.model_dump(mode="json"))
            return out

    @staticmethod
    def _deserialize_behavior(d: dict[str, Any]) -> dict[str, Any]:
        for key in ("tcp_fingerprint", "timing_stats", "phase_sequence"):
            if isinstance(d.get(key), str):
                try:
                    d[key] = json.loads(d[key])
                except (json.JSONDecodeError, TypeError):
                    pass
        # Deserialize tool_guesses JSON array; normalise None → [].
        raw = d.get("tool_guesses")
        if isinstance(raw, str):
            try:
                parsed = json.loads(raw)
                d["tool_guesses"] = parsed if isinstance(parsed, list) else [parsed]
            except (json.JSONDecodeError, TypeError):
                d["tool_guesses"] = []
        elif raw is None:
            d["tool_guesses"] = []
        return d

    @staticmethod
    def _deserialize_attacker(d: dict[str, Any]) -> dict[str, Any]:
        for key in ("services", "deckies", "fingerprints", "commands"):
            if isinstance(d.get(key), str):
                try:
                    d[key] = json.loads(d[key])
                except (json.JSONDecodeError, TypeError):
                    pass
        return d

    async def get_attacker_by_uuid(self, uuid: str) -> Optional[dict[str, Any]]:
        async with self.session_factory() as session:
            result = await session.execute(
                select(Attacker).where(Attacker.uuid == uuid)
            )
            attacker = result.scalar_one_or_none()
            if not attacker:
                return None
            return self._deserialize_attacker(attacker.model_dump(mode="json"))

    async def get_attackers(
        self,
        limit: int = 50,
        offset: int = 0,
        search: Optional[str] = None,
        sort_by: str = "recent",
        service: Optional[str] = None,
    ) -> List[dict[str, Any]]:
        order = {
            "active": desc(Attacker.event_count),
            "traversals": desc(Attacker.is_traversal),
        }.get(sort_by, desc(Attacker.last_seen))
        statement = select(Attacker).order_by(order).offset(offset).limit(limit)
        if search:
            statement = statement.where(Attacker.ip.like(f"%{search}%"))
        if service:
            statement = statement.where(Attacker.services.like(f'%"{service}"%'))
        async with self.session_factory() as session:
            result = await session.execute(statement)
            return [
                self._deserialize_attacker(a.model_dump(mode="json"))
                for a in result.scalars().all()
            ]

    async def get_total_attackers(
        self, search: Optional[str] = None, service: Optional[str] = None
    ) -> int:
        statement = select(func.count()).select_from(Attacker)
        if search:
            statement = statement.where(Attacker.ip.like(f"%{search}%"))
        if service:
            statement = statement.where(Attacker.services.like(f'%"{service}"%'))
        async with self.session_factory() as session:
            result = await session.execute(statement)
            return result.scalar() or 0

    async def get_attacker_commands(
        self,
        uuid: str,
        limit: int = 50,
        offset: int = 0,
        service: Optional[str] = None,
    ) -> dict[str, Any]:
        async with self.session_factory() as session:
            result = await session.execute(
                select(Attacker.commands).where(Attacker.uuid == uuid)
            )
            raw = result.scalar_one_or_none()
            if raw is None:
                return {"total": 0, "data": []}
            commands: list = json.loads(raw) if isinstance(raw, str) else raw
            if service:
                commands = [c for c in commands if c.get("service") == service]
            total = len(commands)
            page = commands[offset: offset + limit]
            return {"total": total, "data": page}
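The `tool_guesses` handling above collapses three storage shapes (JSON list, JSON scalar, NULL) into one list type. A standalone sketch of that normalisation, with the repository stripped away (the function name is ours, not from the codebase):

```python
import json
from typing import Any


def normalize_tool_guesses(raw: Any) -> list:
    """Mirror the repository's tool_guesses rule: JSON-list string -> list,
    JSON-scalar string -> one-item list, None or undecodable -> []."""
    if isinstance(raw, str):
        try:
            parsed = json.loads(raw)
            return parsed if isinstance(parsed, list) else [parsed]
        except (json.JSONDecodeError, TypeError):
            return []
    if raw is None:
        return []
    return raw  # already deserialized by the driver


print(normalize_tool_guesses('["nmap", "masscan"]'))  # → ['nmap', 'masscan']
print(normalize_tool_guesses('"hydra"'))              # → ['hydra']
print(normalize_tool_guesses(None))                   # → []
print(normalize_tool_guesses("not json"))             # → []
```

Callers can then iterate the field without per-row type checks.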

View File

@@ -1,7 +1,7 @@
 from typing import Any, Optional
 import jwt
-from fastapi import Depends, HTTPException, status, Request
+from fastapi import HTTPException, status, Request
 from fastapi.security import OAuth2PasswordBearer
 from decnet.web.auth import ALGORITHM, SECRET_KEY
@@ -96,44 +96,3 @@ async def get_current_user_unchecked(request: Request) -> str:
     Use only for endpoints that must remain reachable with the flag set (e.g. change-password).
     """
     return await _decode_token(request)
-# ---------------------------------------------------------------------------
-# Role-based access control
-# ---------------------------------------------------------------------------
-def require_role(*allowed_roles: str):
-    """Factory that returns a FastAPI dependency enforcing role membership.
-    The returned dependency chains from ``get_current_user`` (JWT + must_change_password)
-    then verifies the user's role is in *allowed_roles*. Returns the full user dict so
-    endpoints can inspect ``user["uuid"]``, ``user["role"]``, etc. without a second lookup.
-    """
-    async def _check(current_user: str = Depends(get_current_user)) -> dict:
-        user = await repo.get_user_by_uuid(current_user)
-        if not user or user["role"] not in allowed_roles:
-            raise HTTPException(
-                status_code=status.HTTP_403_FORBIDDEN,
-                detail="Insufficient permissions",
-            )
-        return user
-    return _check
-def require_stream_role(*allowed_roles: str):
-    """Like ``require_role`` but for SSE endpoints that accept a query-param token."""
-    async def _check(request: Request, token: Optional[str] = None) -> dict:
-        user_uuid = await get_stream_user(request, token)
-        user = await repo.get_user_by_uuid(user_uuid)
-        if not user or user["role"] not in allowed_roles:
-            raise HTTPException(
-                status_code=status.HTTP_403_FORBIDDEN,
-                detail="Insufficient permissions",
-            )
-        return user
-    return _check
-require_admin = require_role("admin")
-require_viewer = require_role("viewer", "admin")
-require_stream_viewer = require_stream_role("viewer", "admin")
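The `require_role` factory being removed in this hunk is a standard closure-based RBAC pattern: the outer call captures the allowed roles, the inner coroutine does the check per request. A framework-free sketch of the same shape (plain async functions stand in for FastAPI dependencies; the user store and exception type are illustrative, not from the codebase):

```python
import asyncio

# Hypothetical in-memory store standing in for repo.get_user_by_uuid.
USERS = {
    "u-admin": {"uuid": "u-admin", "role": "admin"},
    "u-view": {"uuid": "u-view", "role": "viewer"},
}


class Forbidden(Exception):
    """Stand-in for HTTPException(status_code=403)."""


def require_role(*allowed_roles: str):
    """Return an async checker closed over the allowed roles."""
    async def _check(current_user: str) -> dict:
        user = USERS.get(current_user)
        if not user or user["role"] not in allowed_roles:
            raise Forbidden("Insufficient permissions")
        return user
    return _check


# Admin is a superset of viewer, as in the removed module.
require_admin = require_role("admin")
require_viewer = require_role("viewer", "admin")


async def main() -> None:
    print((await require_viewer("u-admin"))["role"])  # admin passes the viewer check
    try:
        await require_admin("u-view")
    except Forbidden:
        print("viewer blocked from admin route")


asyncio.run(main())
```

In FastAPI the returned `_check` would take `current_user` via `Depends(get_current_user)` instead of as an argument; the closure structure is identical.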

View File

@@ -1,22 +1,13 @@
 import asyncio
 import os
+import logging
 import json
 from typing import Any
 from pathlib import Path
-from decnet.logging import get_logger
-from decnet.telemetry import (
-    traced as _traced,
-    get_tracer as _get_tracer,
-    extract_context as _extract_ctx,
-    start_span_with_context as _start_span,
-)
 from decnet.web.db.repository import BaseRepository
-logger = get_logger("api")
+logger: logging.Logger = logging.getLogger("decnet.web.ingester")
-_INGEST_STATE_KEY = "ingest_worker_position"
 async def log_ingestion_worker(repo: BaseRepository) -> None:
     """
@@ -29,11 +20,9 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
         return
     _json_log_path: Path = Path(_base_log_file).with_suffix(".json")
-    _saved = await repo.get_state(_INGEST_STATE_KEY)
-    _position: int = _saved.get("position", 0) if _saved else 0
-    logger.info("ingest worker started path=%s position=%d", _json_log_path, _position)
+    _position: int = 0
+    logger.info(f"Starting JSON log ingestion from {_json_log_path}")
     while True:
         try:
@@ -45,7 +34,6 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
             if _stat.st_size < _position:
                 # File rotated or truncated
                 _position = 0
-                await repo.set_state(_INGEST_STATE_KEY, {"position": 0})
             if _stat.st_size == _position:
                 # No new data
@@ -65,49 +53,27 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
                 try:
                     _log_data: dict[str, Any] = json.loads(_line.strip())
-                    # Extract trace context injected by the collector.
-                    # This makes the ingester span a child of the collector span,
-                    # showing the full event journey in Jaeger.
-                    _parent_ctx = _extract_ctx(_log_data)
-                    _tracer = _get_tracer("ingester")
-                    with _start_span(_tracer, "ingester.process_record", context=_parent_ctx) as _span:
-                        _span.set_attribute("decky", _log_data.get("decky", ""))
-                        _span.set_attribute("service", _log_data.get("service", ""))
-                        _span.set_attribute("event_type", _log_data.get("event_type", ""))
-                        _span.set_attribute("attacker_ip", _log_data.get("attacker_ip", ""))
-                        # Persist trace context in the DB row so the SSE
-                        # read path can link back to this ingestion trace.
-                        _sctx = getattr(_span, "get_span_context", None)
-                        if _sctx:
-                            _ctx = _sctx()
-                            if _ctx and getattr(_ctx, "trace_id", 0):
-                                _log_data["trace_id"] = format(_ctx.trace_id, "032x")
-                                _log_data["span_id"] = format(_ctx.span_id, "016x")
-                    logger.debug("ingest: record decky=%s event_type=%s", _log_data.get("decky"), _log_data.get("event_type"))
-                    await repo.add_log(_log_data)
-                    await _extract_bounty(repo, _log_data)
+                    await repo.add_log(_log_data)
+                    await _extract_bounty(repo, _log_data)
                 except json.JSONDecodeError:
-                    logger.error("ingest: failed to decode JSON log line: %s", _line.strip())
+                    logger.error(f"Failed to decode JSON log line: {_line}")
                     continue
                 # Update position after successful line read
                 _position = _f.tell()
-                await repo.set_state(_INGEST_STATE_KEY, {"position": _position})
         except Exception as _e:
             _err_str = str(_e).lower()
             if "no such table" in _err_str or "no active connection" in _err_str or "connection closed" in _err_str:
-                logger.error("ingest: post-shutdown or fatal DB error: %s", _e)
+                logger.error(f"Post-shutdown or fatal DB error in ingester: {_e}")
                 break  # Exit worker — DB is gone or uninitialized
-            logger.error("ingest: error in worker: %s", _e)
+            logger.error(f"Error in log ingestion worker: {_e}")
             await asyncio.sleep(5)
         await asyncio.sleep(1)
-@_traced("ingester.extract_bounty")
 async def _extract_bounty(repo: BaseRepository, log_data: dict[str, Any]) -> None:
     """Detect and extract valuable artifacts (bounties) from log entries."""
     _fields = log_data.get("fields")
@@ -130,180 +96,4 @@ async def _extract_bounty(repo: BaseRepository, log_data: dict[str, Any]) -> Non
             }
         })
-    # 2. HTTP User-Agent fingerprint
+    # 2. Add more extractors here later (e.g. file hashes, crypto keys)
-    _h_raw = _fields.get("headers")
-    if isinstance(_h_raw, dict):
-        _headers = _h_raw
-    elif isinstance(_h_raw, str):
-        try:
-            _parsed = json.loads(_h_raw)
-            _headers = _parsed if isinstance(_parsed, dict) else {}
-        except (json.JSONDecodeError, ValueError):
-            _headers = {}
-    else:
-        _headers = {}
-    _ua = _headers.get("User-Agent") or _headers.get("user-agent")
-    if _ua:
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": log_data.get("service"),
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "http_useragent",
-                "value": _ua,
-                "method": _fields.get("method"),
-                "path": _fields.get("path"),
-            }
-        })
-    # 3. VNC client version fingerprint
-    _vnc_ver = _fields.get("client_version")
-    if _vnc_ver and log_data.get("event_type") == "version":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": log_data.get("service"),
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "vnc_client_version",
-                "value": _vnc_ver,
-            }
-        })
-    # 4. SSH client banner fingerprint (deferred — requires asyncssh server)
-    # Fires on: service=ssh, event_type=client_banner, fields.client_banner
-    # 5. JA3/JA3S TLS fingerprint from sniffer container
-    _ja3 = _fields.get("ja3")
-    if _ja3 and log_data.get("service") == "sniffer":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "sniffer",
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "ja3",
-                "ja3": _ja3,
-                "ja3s": _fields.get("ja3s"),
-                "ja4": _fields.get("ja4"),
-                "ja4s": _fields.get("ja4s"),
-                "tls_version": _fields.get("tls_version"),
-                "sni": _fields.get("sni") or None,
-                "alpn": _fields.get("alpn") or None,
-                "dst_port": _fields.get("dst_port"),
-                "raw_ciphers": _fields.get("raw_ciphers"),
-                "raw_extensions": _fields.get("raw_extensions"),
-            },
-        })
-    # 6. JA4L latency fingerprint from sniffer
-    _ja4l_rtt = _fields.get("ja4l_rtt_ms")
-    if _ja4l_rtt and log_data.get("service") == "sniffer":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "sniffer",
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "ja4l",
-                "rtt_ms": _ja4l_rtt,
-                "client_ttl": _fields.get("ja4l_client_ttl"),
-            },
-        })
-    # 7. TLS session resumption behavior
-    _resumption = _fields.get("resumption")
-    if _resumption and log_data.get("service") == "sniffer":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "sniffer",
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "tls_resumption",
-                "mechanisms": _resumption,
-            },
-        })
-    # 8. TLS certificate details (TLS 1.2 only — passive extraction)
-    _subject_cn = _fields.get("subject_cn")
-    if _subject_cn and log_data.get("service") == "sniffer":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "sniffer",
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "tls_certificate",
-                "subject_cn": _subject_cn,
-                "issuer": _fields.get("issuer"),
-                "self_signed": _fields.get("self_signed"),
-                "not_before": _fields.get("not_before"),
-                "not_after": _fields.get("not_after"),
-                "sans": _fields.get("sans"),
-                "sni": _fields.get("sni") or None,
-            },
-        })
-    # 9. JARM fingerprint from active prober
-    _jarm = _fields.get("jarm_hash")
-    if _jarm and log_data.get("service") == "prober":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "prober",
-            "attacker_ip": _fields.get("target_ip", "Unknown"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "jarm",
-                "hash": _jarm,
-                "target_ip": _fields.get("target_ip"),
-                "target_port": _fields.get("target_port"),
-            },
-        })
-    # 10. HASSHServer fingerprint from active prober
-    _hassh = _fields.get("hassh_server_hash")
-    if _hassh and log_data.get("service") == "prober":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "prober",
-            "attacker_ip": _fields.get("target_ip", "Unknown"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "hassh_server",
-                "hash": _hassh,
-                "target_ip": _fields.get("target_ip"),
-                "target_port": _fields.get("target_port"),
-                "ssh_banner": _fields.get("ssh_banner"),
-                "kex_algorithms": _fields.get("kex_algorithms"),
-                "encryption_s2c": _fields.get("encryption_s2c"),
-                "mac_s2c": _fields.get("mac_s2c"),
-                "compression_s2c": _fields.get("compression_s2c"),
-            },
-        })
-    # 11. TCP/IP stack fingerprint from active prober
-    _tcpfp = _fields.get("tcpfp_hash")
-    if _tcpfp and log_data.get("service") == "prober":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": "prober",
-            "attacker_ip": _fields.get("target_ip", "Unknown"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "tcpfp",
-                "hash": _tcpfp,
-                "raw": _fields.get("tcpfp_raw"),
-                "target_ip": _fields.get("target_ip"),
-                "target_port": _fields.get("target_port"),
-                "ttl": _fields.get("ttl"),
-                "window_size": _fields.get("window_size"),
-                "df_bit": _fields.get("df_bit"),
-                "mss": _fields.get("mss"),
-                "window_scale": _fields.get("window_scale"),
-                "sack_ok": _fields.get("sack_ok"),
-                "timestamp": _fields.get("timestamp"),
-                "options_order": _fields.get("options_order"),
-            },
-        })
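Every extractor in the removed chain has the same shape: a guard on one field plus the service name, then a bounty dict built from `fields`. That shape can be expressed table-driven, which is one way such a chain is often refactored (the function names and the sample record below are illustrative, not from the codebase):

```python
from typing import Any, Callable

# Each extractor is a (guard, builder) pair: the guard inspects the record,
# the builder emits the bounty payload when the guard matches.
def _ja3_guard(rec: dict[str, Any]) -> bool:
    return bool(rec.get("fields", {}).get("ja3")) and rec.get("service") == "sniffer"


def _ja3_build(rec: dict[str, Any]) -> dict[str, Any]:
    f = rec["fields"]
    return {
        "bounty_type": "fingerprint",
        "payload": {"fingerprint_type": "ja3", "ja3": f["ja3"], "ja3s": f.get("ja3s")},
    }


EXTRACTORS: list[tuple[Callable[[dict], bool], Callable[[dict], dict]]] = [
    (_ja3_guard, _ja3_build),
    # further (guard, builder) pairs slot in here
]


def extract_bounties(rec: dict[str, Any]) -> list[dict[str, Any]]:
    """Run every registered extractor whose guard matches the record."""
    return [build(rec) for guard, build in EXTRACTORS if guard(rec)]


sample = {"service": "sniffer", "fields": {"ja3": "771,4865-4866", "ja3s": "771,4865"}}
print(extract_bounties(sample)[0]["payload"]["fingerprint_type"])  # ja3
```

Registering pairs in a list keeps each extractor independently testable, at the cost of the linear readability the original if-chain has.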

View File

@@ -11,14 +11,6 @@ from .fleet.api_mutate_decky import router as mutate_decky_router
 from .fleet.api_mutate_interval import router as mutate_interval_router
 from .fleet.api_deploy_deckies import router as deploy_deckies_router
 from .stream.api_stream_events import router as stream_router
-from .attackers.api_get_attackers import router as attackers_router
-from .attackers.api_get_attacker_detail import router as attacker_detail_router
-from .attackers.api_get_attacker_commands import router as attacker_commands_router
-from .config.api_get_config import router as config_get_router
-from .config.api_update_config import router as config_update_router
-from .config.api_manage_users import router as config_users_router
-from .config.api_reinit import router as config_reinit_router
-from .health.api_get_health import router as health_router
 api_router = APIRouter()
@@ -39,18 +31,6 @@ api_router.include_router(mutate_decky_router)
 api_router.include_router(mutate_interval_router)
 api_router.include_router(deploy_deckies_router)
-# Attacker Profiles
-api_router.include_router(attackers_router)
-api_router.include_router(attacker_detail_router)
-api_router.include_router(attacker_commands_router)
 # Observability
 api_router.include_router(stats_router)
 api_router.include_router(stream_router)
-api_router.include_router(health_router)
-# Configuration
-api_router.include_router(config_get_router)
-api_router.include_router(config_update_router)
-api_router.include_router(config_users_router)
-api_router.include_router(config_reinit_router)

View File

@@ -1,41 +0,0 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, HTTPException, Query
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_viewer, repo

router = APIRouter()

@router.get(
    "/attackers/{uuid}/commands",
    tags=["Attacker Profiles"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Insufficient permissions"},
        404: {"description": "Attacker not found"},
    },
)
@_traced("api.get_attacker_commands")
async def get_attacker_commands(
    uuid: str,
    limit: int = Query(50, ge=1, le=200),
    offset: int = Query(0, ge=0, le=2147483647),
    service: Optional[str] = None,
    user: dict = Depends(require_viewer),
) -> dict[str, Any]:
    """Retrieve paginated commands for an attacker profile."""
    attacker = await repo.get_attacker_by_uuid(uuid)
    if not attacker:
        raise HTTPException(status_code=404, detail="Attacker not found")

    def _norm(v: Optional[str]) -> Optional[str]:
        if v in (None, "null", "NULL", "undefined", ""):
            return None
        return v

    result = await repo.get_attacker_commands(
        uuid=uuid, limit=limit, offset=offset, service=_norm(service),
    )
    return {"total": result["total"], "limit": limit, "offset": offset, "data": result["data"]}
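Because an attacker's commands are stored as a single JSON blob, pagination here is an in-memory slice rather than SQL `LIMIT`/`OFFSET`. The slice semantics in isolation (helper name is ours):

```python
def paginate(items: list, limit: int, offset: int) -> dict:
    """Slice-based paging as used by get_attacker_commands: an out-of-range
    offset yields an empty page rather than raising IndexError."""
    return {"total": len(items), "data": items[offset: offset + limit]}


cmds = [{"cmd": f"c{i}"} for i in range(5)]
print(paginate(cmds, limit=2, offset=4)["data"])   # → [{'cmd': 'c4'}]
print(paginate(cmds, limit=2, offset=10)["data"])  # → []
print(paginate(cmds, limit=2, offset=0)["total"])  # → 5
```

Note that `total` reflects the filtered count before slicing, which is what the UI needs to render page controls.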

View File

@@ -1,30 +0,0 @@
from typing import Any
from fastapi import APIRouter, Depends, HTTPException
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_viewer, repo

router = APIRouter()

@router.get(
    "/attackers/{uuid}",
    tags=["Attacker Profiles"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Insufficient permissions"},
        404: {"description": "Attacker not found"},
    },
)
@_traced("api.get_attacker_detail")
async def get_attacker_detail(
    uuid: str,
    user: dict = Depends(require_viewer),
) -> dict[str, Any]:
    """Retrieve a single attacker profile by UUID (with behavior block)."""
    attacker = await repo.get_attacker_by_uuid(uuid)
    if not attacker:
        raise HTTPException(status_code=404, detail="Attacker not found")
    attacker["behavior"] = await repo.get_attacker_behavior(uuid)
    return attacker

View File

@@ -1,48 +0,0 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, Query
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_viewer, repo
from decnet.web.db.models import AttackersResponse

router = APIRouter()

@router.get(
    "/attackers",
    response_model=AttackersResponse,
    tags=["Attacker Profiles"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Insufficient permissions"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.get_attackers")
async def get_attackers(
    limit: int = Query(50, ge=1, le=1000),
    offset: int = Query(0, ge=0, le=2147483647),
    search: Optional[str] = None,
    sort_by: str = Query("recent", pattern="^(recent|active|traversals)$"),
    service: Optional[str] = None,
    user: dict = Depends(require_viewer),
) -> dict[str, Any]:
    """Retrieve paginated attacker profiles."""
    def _norm(v: Optional[str]) -> Optional[str]:
        if v in (None, "null", "NULL", "undefined", ""):
            return None
        return v

    s = _norm(search)
    svc = _norm(service)
    _data = await repo.get_attackers(limit=limit, offset=offset, search=s, sort_by=sort_by, service=svc)
    _total = await repo.get_total_attackers(search=s, service=svc)
    # Bulk-join behavior rows for the IPs in this page to avoid N+1 queries.
    _ips = {row["ip"] for row in _data if row.get("ip")}
    _behaviors = await repo.get_behaviors_for_ips(_ips) if _ips else {}
    for row in _data:
        row["behavior"] = _behaviors.get(row.get("ip"))
    return {"total": _total, "limit": limit, "offset": offset, "data": _data}
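The `_norm` helper guards against the literal `"null"`/`"undefined"` strings that JavaScript clients often send for empty query parameters; collapsing them to a real `None` makes the repository skip the corresponding SQL filter. The behaviour in isolation:

```python
from typing import Optional


def norm(v: Optional[str]) -> Optional[str]:
    """Collapse JS-client junk values to None so optional filters are skipped."""
    if v in (None, "null", "NULL", "undefined", ""):
        return None
    return v


print(norm("undefined"))  # → None
print(norm(""))           # → None
print(norm("ssh"))        # → ssh
```

Without this, a request like `?service=undefined` would filter for attackers whose services JSON literally contains `"undefined"` and return nothing.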

View File

@@ -2,7 +2,6 @@ from typing import Any, Optional
 from fastapi import APIRouter, Depends, HTTPException, status
-from decnet.telemetry import traced as _traced
 from decnet.web.auth import get_password_hash, verify_password
 from decnet.web.dependencies import get_current_user_unchecked, repo
 from decnet.web.db.models import ChangePasswordRequest
@@ -19,7 +18,6 @@ router = APIRouter()
         422: {"description": "Validation error"}
     },
 )
-@_traced("api.change_password")
 async def change_password(request: ChangePasswordRequest, current_user: str = Depends(get_current_user_unchecked)) -> dict[str, str]:
     _user: Optional[dict[str, Any]] = await repo.get_user_by_uuid(current_user)
     if not _user or not verify_password(request.old_password, _user["password_hash"]):

View File

@@ -3,7 +3,6 @@ from typing import Any, Optional
 from fastapi import APIRouter, HTTPException, status
-from decnet.telemetry import traced as _traced
 from decnet.web.auth import (
     ACCESS_TOKEN_EXPIRE_MINUTES,
     create_access_token,
@@ -25,7 +24,6 @@ router = APIRouter()
         422: {"description": "Validation error"}
     },
 )
-@_traced("api.login")
 async def login(request: LoginRequest) -> dict[str, Any]:
     _user: Optional[dict[str, Any]] = await repo.get_user_by_username(request.username)
     if not _user or not verify_password(request.password, _user["password_hash"]):
@@ -42,6 +40,6 @@ async def login(request: LoginRequest) -> dict[str, Any]:
     )
     return {
         "access_token": _access_token,
-        "token_type": "bearer",  # nosec B105 — OAuth2 token type, not a password
+        "token_type": "bearer",  # nosec B105
         "must_change_password": bool(_user.get("must_change_password", False))
     }

View File

@@ -2,22 +2,20 @@ from typing import Any, Optional
 from fastapi import APIRouter, Depends, Query
-from decnet.telemetry import traced as _traced
-from decnet.web.dependencies import require_viewer, repo
+from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import BountyResponse
 router = APIRouter()
 @router.get("/bounty", response_model=BountyResponse, tags=["Bounty Vault"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 422: {"description": "Validation error"}},)
+    responses={401: {"description": "Could not validate credentials"}, 422: {"description": "Validation error"}},)
-@_traced("api.get_bounties")
 async def get_bounties(
     limit: int = Query(50, ge=1, le=1000),
     offset: int = Query(0, ge=0, le=2147483647),
     bounty_type: Optional[str] = None,
     search: Optional[str] = None,
-    user: dict = Depends(require_viewer)
+    current_user: str = Depends(get_current_user)
 ) -> dict[str, Any]:
     """Retrieve collected bounties (harvested credentials, payloads, etc.)."""
     def _norm(v: Optional[str]) -> Optional[str]:

View File

@@ -1,58 +0,0 @@
from fastapi import APIRouter, Depends
from decnet.env import DECNET_DEVELOPER
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_viewer, repo
from decnet.web.db.models import UserResponse

router = APIRouter()

_DEFAULT_DEPLOYMENT_LIMIT = 10
_DEFAULT_MUTATION_INTERVAL = "30m"

@router.get(
    "/config",
    tags=["Configuration"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Insufficient permissions"},
    },
)
@_traced("api.get_config")
async def api_get_config(user: dict = Depends(require_viewer)) -> dict:
    limits_state = await repo.get_state("config_limits")
    globals_state = await repo.get_state("config_globals")
    deployment_limit = (
        limits_state.get("deployment_limit", _DEFAULT_DEPLOYMENT_LIMIT)
        if limits_state
        else _DEFAULT_DEPLOYMENT_LIMIT
    )
    global_mutation_interval = (
        globals_state.get("global_mutation_interval", _DEFAULT_MUTATION_INTERVAL)
        if globals_state
        else _DEFAULT_MUTATION_INTERVAL
    )
    base = {
        "role": user["role"],
        "deployment_limit": deployment_limit,
        "global_mutation_interval": global_mutation_interval,
    }
    if user["role"] == "admin":
        all_users = await repo.list_users()
        base["users"] = [
            UserResponse(
                uuid=u["uuid"],
                username=u["username"],
                role=u["role"],
                must_change_password=u["must_change_password"],
            ).model_dump()
            for u in all_users
        ]
    if DECNET_DEVELOPER:
        base["developer_mode"] = True
    return base

View File

@@ -1,131 +0,0 @@
import uuid as _uuid
from fastapi import APIRouter, Depends, HTTPException
from decnet.telemetry import traced as _traced
from decnet.web.auth import get_password_hash
from decnet.web.dependencies import require_admin, repo
from decnet.web.db.models import (
    CreateUserRequest,
    UpdateUserRoleRequest,
    ResetUserPasswordRequest,
    UserResponse,
)

router = APIRouter()

@router.post(
    "/config/users",
    tags=["Configuration"],
    responses={
        400: {"description": "Bad Request (e.g. malformed JSON)"},
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required"},
        409: {"description": "Username already exists"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.create_user")
async def api_create_user(
    req: CreateUserRequest,
    admin: dict = Depends(require_admin),
) -> UserResponse:
    existing = await repo.get_user_by_username(req.username)
    if existing:
        raise HTTPException(status_code=409, detail="Username already exists")
    user_uuid = str(_uuid.uuid4())
    await repo.create_user({
        "uuid": user_uuid,
        "username": req.username,
        "password_hash": get_password_hash(req.password),
        "role": req.role,
        "must_change_password": True,  # nosec B105 — not a password
    })
    return UserResponse(
        uuid=user_uuid,
        username=req.username,
        role=req.role,
        must_change_password=True,
    )

@router.delete(
    "/config/users/{user_uuid}",
    tags=["Configuration"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required / cannot delete self"},
        404: {"description": "User not found"},
    },
)
@_traced("api.delete_user")
async def api_delete_user(
    user_uuid: str,
    admin: dict = Depends(require_admin),
) -> dict[str, str]:
    if user_uuid == admin["uuid"]:
        raise HTTPException(status_code=403, detail="Cannot delete your own account")
    deleted = await repo.delete_user(user_uuid)
    if not deleted:
        raise HTTPException(status_code=404, detail="User not found")
    return {"message": "User deleted"}

@router.put(
    "/config/users/{user_uuid}/role",
    tags=["Configuration"],
    responses={
        400: {"description": "Bad Request (e.g. malformed JSON)"},
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required / cannot change own role"},
        404: {"description": "User not found"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.update_user_role")
async def api_update_user_role(
    user_uuid: str,
    req: UpdateUserRoleRequest,
    admin: dict = Depends(require_admin),
) -> dict[str, str]:
    if user_uuid == admin["uuid"]:
        raise HTTPException(status_code=403, detail="Cannot change your own role")
    target = await repo.get_user_by_uuid(user_uuid)
    if not target:
        raise HTTPException(status_code=404, detail="User not found")
    await repo.update_user_role(user_uuid, req.role)
    return {"message": "User role updated"}

@router.put(
    "/config/users/{user_uuid}/reset-password",
    tags=["Configuration"],
    responses={
        400: {"description": "Bad Request (e.g. malformed JSON)"},
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required"},
        404: {"description": "User not found"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.reset_user_password")
async def api_reset_user_password(
    user_uuid: str,
    req: ResetUserPasswordRequest,
    admin: dict = Depends(require_admin),
) -> dict[str, str]:
    target = await repo.get_user_by_uuid(user_uuid)
    if not target:
        raise HTTPException(status_code=404, detail="User not found")
    await repo.update_user_password(
        user_uuid,
        get_password_hash(req.new_password),
        must_change_password=True,
    )
    return {"message": "Password reset successfully"}

View File

@@ -1,27 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException
from decnet.env import DECNET_DEVELOPER
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_admin, repo

router = APIRouter()

@router.delete(
    "/config/reinit",
    tags=["Configuration"],
    responses={
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required or developer mode not enabled"},
    },
)
@_traced("api.reinit")
async def api_reinit(admin: dict = Depends(require_admin)) -> dict:
    if not DECNET_DEVELOPER:
        raise HTTPException(status_code=403, detail="Developer mode is not enabled")
    counts = await repo.purge_logs_and_bounties()
    return {
        "message": "Data purged",
        "deleted": counts,
    }

View File

@@ -1,48 +0,0 @@
from fastapi import APIRouter, Depends
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_admin, repo
from decnet.web.db.models import DeploymentLimitRequest, GlobalMutationIntervalRequest

router = APIRouter()

@router.put(
    "/config/deployment-limit",
    tags=["Configuration"],
    responses={
        400: {"description": "Bad Request (e.g. malformed JSON)"},
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.update_deployment_limit")
async def api_update_deployment_limit(
    req: DeploymentLimitRequest,
    admin: dict = Depends(require_admin),
) -> dict[str, str]:
    await repo.set_state("config_limits", {"deployment_limit": req.deployment_limit})
    return {"message": "Deployment limit updated"}

@router.put(
    "/config/global-mutation-interval",
    tags=["Configuration"],
    responses={
        400: {"description": "Bad Request (e.g. malformed JSON)"},
        401: {"description": "Could not validate credentials"},
        403: {"description": "Admin access required"},
        422: {"description": "Validation error"},
    },
)
@_traced("api.update_global_mutation_interval")
async def api_update_global_mutation_interval(
    req: GlobalMutationIntervalRequest,
    admin: dict = Depends(require_admin),
) -> dict[str, str]:
    await repo.set_state(
        "config_globals",
        {"global_mutation_interval": req.global_mutation_interval},
    )
    return {"message": "Global mutation interval updated"}

View File

@@ -1,18 +1,15 @@
+import logging
 import os
 from fastapi import APIRouter, Depends, HTTPException
-from decnet.logging import get_logger
-from decnet.telemetry import traced as _traced
-from decnet.config import DEFAULT_MUTATE_INTERVAL, DecnetConfig, _ROOT
+from decnet.config import DEFAULT_MUTATE_INTERVAL, DecnetConfig, _ROOT, log
 from decnet.engine import deploy as _deploy
 from decnet.ini_loader import load_ini_from_string
 from decnet.network import detect_interface, detect_subnet, get_host_ip
-from decnet.web.dependencies import require_admin, repo
+from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import DeployIniRequest
-log = get_logger("api")
 router = APIRouter()
@@ -22,14 +19,12 @@ router = APIRouter()
     responses={
         400: {"description": "Bad Request (e.g. malformed JSON)"},
         401: {"description": "Could not validate credentials"},
-        403: {"description": "Insufficient permissions"},
         409: {"description": "Configuration conflict (e.g. invalid IP allocation or network mismatch)"},
         422: {"description": "Invalid INI config or schema validation error"},
         500: {"description": "Deployment failed"}
     }
 )
-@_traced("api.deploy_deckies")
-async def api_deploy_deckies(req: DeployIniRequest, admin: dict = Depends(require_admin)) -> dict[str, str]:
+async def api_deploy_deckies(req: DeployIniRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
     from decnet.fleet import build_deckies_from_ini
     try:
@@ -91,16 +86,6 @@ async def api_deploy_deckies(req: DeployIniRequest, admin: dict = Depends(requir
         for new_decky in new_decky_configs:
             existing_deckies_map[new_decky.name] = new_decky
-        # Enforce deployment limit
-        limits_state = await repo.get_state("config_limits")
-        deployment_limit = limits_state.get("deployment_limit", 10) if limits_state else 10
-        if len(existing_deckies_map) > deployment_limit:
-            raise HTTPException(
-                status_code=409,
-                detail=f"Deployment would result in {len(existing_deckies_map)} deckies, "
-                f"exceeding the configured limit of {deployment_limit}",
-            )
         config.deckies = list(existing_deckies_map.values())
         # We call deploy(config) which regenerates docker-compose and runs `up -d --remove-orphans`.
@@ -115,7 +100,7 @@ async def api_deploy_deckies(req: DeployIniRequest, admin: dict = Depends(requir
         }
         await repo.set_state("deployment", new_state_payload)
     except Exception as e:
-        log.exception("Deployment failed: %s", e)
+        logging.getLogger("decnet.web.api").exception("Deployment failed: %s", e)
         raise HTTPException(status_code=500, detail="Deployment failed. Check server logs for details.")
     return {"message": "Deckies deployed successfully"}

View File

@@ -2,14 +2,12 @@ from typing import Any
 from fastapi import APIRouter, Depends
-from decnet.telemetry import traced as _traced
-from decnet.web.dependencies import require_viewer, repo
+from decnet.web.dependencies import get_current_user, repo
 router = APIRouter()
 @router.get("/deckies", tags=["Fleet Management"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 422: {"description": "Validation error"}},)
-@_traced("api.get_deckies")
-async def get_deckies(user: dict = Depends(require_viewer)) -> list[dict[str, Any]]:
+    responses={401: {"description": "Could not validate credentials"}, 422: {"description": "Validation error"}},)
+async def get_deckies(current_user: str = Depends(get_current_user)) -> list[dict[str, Any]]:
     return await repo.get_deckies()

View File

@@ -1,9 +1,8 @@
 import os
 from fastapi import APIRouter, Depends, HTTPException, Path
-from decnet.telemetry import traced as _traced
 from decnet.mutator import mutate_decky
-from decnet.web.dependencies import require_admin, repo
+from decnet.web.dependencies import get_current_user, repo
 router = APIRouter()
@@ -11,12 +10,11 @@ router = APIRouter()
 @router.post(
     "/deckies/{decky_name}/mutate",
     tags=["Fleet Management"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 404: {"description": "Decky not found"}}
+    responses={401: {"description": "Could not validate credentials"}, 404: {"description": "Decky not found"}}
 )
-@_traced("api.mutate_decky")
 async def api_mutate_decky(
     decky_name: str = Path(..., pattern=r"^[a-z0-9\-]{1,64}$"),
-    admin: dict = Depends(require_admin),
+    current_user: str = Depends(get_current_user),
 ) -> dict[str, str]:
     if os.environ.get("DECNET_CONTRACT_TEST") == "true":
         return {"message": f"Successfully mutated {decky_name} (Contract Test Mock)"}

View File

@@ -1,8 +1,7 @@
 from fastapi import APIRouter, Depends, HTTPException
-from decnet.telemetry import traced as _traced
 from decnet.config import DecnetConfig
-from decnet.web.dependencies import require_admin, repo
+from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import MutateIntervalRequest
 router = APIRouter()
@@ -20,13 +19,11 @@ def _parse_duration(s: str) -> int:
     responses={
         400: {"description": "Bad Request (e.g. malformed JSON)"},
         401: {"description": "Could not validate credentials"},
-        403: {"description": "Insufficient permissions"},
         404: {"description": "No active deployment or decky not found"},
         422: {"description": "Validation error"}
     },
 )
-@_traced("api.update_mutate_interval")
-async def api_update_mutate_interval(decky_name: str, req: MutateIntervalRequest, admin: dict = Depends(require_admin)) -> dict[str, str]:
+async def api_update_mutate_interval(decky_name: str, req: MutateIntervalRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
     state_dict = await repo.get_state("deployment")
     if not state_dict:
         raise HTTPException(status_code=404, detail="No active deployment")
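The hunk header above references a `_parse_duration(s: str) -> int` helper whose body is not shown in this diff. An assumed implementation for illustration only, mapping strings such as `"30s"`, `"5m"`, `"2h"` to seconds (the real helper may accept a different grammar):

```python
import re

# Unit suffixes -> seconds; the supported set is an assumption.
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}


def parse_duration(s: str) -> int:
    """Parse a '<number><unit>' duration string into seconds."""
    match = re.fullmatch(r"(\d+)([smhd])", s.strip().lower())
    if not match:
        raise ValueError(f"Invalid duration: {s!r}")
    value, unit = match.groups()
    return int(value) * _UNITS[unit]
```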

View File

@@ -1,83 +0,0 @@
from typing import Any
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse
from decnet.telemetry import traced as _traced
from decnet.web.dependencies import require_viewer, repo
from decnet.web.db.models import HealthResponse, ComponentHealth
router = APIRouter()
_OPTIONAL_SERVICES = {"sniffer_worker"}
@router.get(
"/health",
response_model=HealthResponse,
tags=["Observability"],
responses={
401: {"description": "Could not validate credentials"},
403: {"description": "Insufficient permissions"},
503: {"model": HealthResponse, "description": "System unhealthy"},
},
)
@_traced("api.get_health")
async def get_health(user: dict = Depends(require_viewer)) -> Any:
components: dict[str, ComponentHealth] = {}
# 1. Database
try:
await repo.get_total_logs()
components["database"] = ComponentHealth(status="ok")
except Exception as exc:
components["database"] = ComponentHealth(status="failing", detail=str(exc))
# 2. Background workers
from decnet.web.api import get_background_tasks
for name, task in get_background_tasks().items():
if task is None:
components[name] = ComponentHealth(status="failing", detail="not started")
elif task.done():
if task.cancelled():
detail = "cancelled"
else:
exc = task.exception()
detail = f"exited: {exc}" if exc else "exited unexpectedly"
components[name] = ComponentHealth(status="failing", detail=detail)
else:
components[name] = ComponentHealth(status="ok")
# 3. Docker daemon
try:
import docker
client = docker.from_env()
client.ping()
client.close()
components["docker"] = ComponentHealth(status="ok")
except Exception as exc:
components["docker"] = ComponentHealth(status="failing", detail=str(exc))
# Compute overall status
required_failing = any(
c.status == "failing"
for name, c in components.items()
if name not in _OPTIONAL_SERVICES
)
optional_failing = any(
c.status == "failing"
for name, c in components.items()
if name in _OPTIONAL_SERVICES
)
if required_failing:
overall = "unhealthy"
elif optional_failing:
overall = "degraded"
else:
overall = "healthy"
result = HealthResponse(status=overall, components=components)
status_code = 503 if overall == "unhealthy" else 200
return JSONResponse(content=result.model_dump(), status_code=status_code)
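The deleted `/health` route above reduces per-component statuses to one overall value: any required component failing makes the system "unhealthy", while a failure confined to optional services (here `sniffer_worker`) only degrades it. That reduction, restated standalone (statuses simplified to plain strings for the sketch):

```python
# Mirrors the deleted route's _OPTIONAL_SERVICES set.
OPTIONAL_SERVICES = {"sniffer_worker"}


def overall_status(components: dict[str, str]) -> str:
    """Collapse component statuses ("ok"/"failing") into an overall status."""
    required_failing = any(
        status == "failing"
        for name, status in components.items()
        if name not in OPTIONAL_SERVICES
    )
    optional_failing = any(
        status == "failing"
        for name, status in components.items()
        if name in OPTIONAL_SERVICES
    )
    if required_failing:
        return "unhealthy"      # served with HTTP 503 in the deleted route
    if optional_failing:
        return "degraded"
    return "healthy"
```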

View File

@@ -2,21 +2,19 @@ from typing import Any, Optional
 from fastapi import APIRouter, Depends, Query
-from decnet.telemetry import traced as _traced
-from decnet.web.dependencies import require_viewer, repo
+from decnet.web.dependencies import get_current_user, repo
 router = APIRouter()
 @router.get("/logs/histogram", tags=["Logs"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 422: {"description": "Validation error"}},)
+    responses={401: {"description": "Could not validate credentials"}, 422: {"description": "Validation error"}},)
-@_traced("api.get_logs_histogram")
 async def get_logs_histogram(
     search: Optional[str] = None,
     start_time: Optional[str] = Query(None),
     end_time: Optional[str] = Query(None),
     interval_minutes: int = Query(15, ge=1),
-    user: dict = Depends(require_viewer)
+    current_user: str = Depends(get_current_user)
 ) -> list[dict[str, Any]]:
     def _norm(v: Optional[str]) -> Optional[str]:
         if v in (None, "null", "NULL", "undefined", ""):

View File

@@ -2,23 +2,21 @@ from typing import Any, Optional
 from fastapi import APIRouter, Depends, Query
-from decnet.telemetry import traced as _traced
-from decnet.web.dependencies import require_viewer, repo
+from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import LogsResponse
 router = APIRouter()
 @router.get("/logs", response_model=LogsResponse, tags=["Logs"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 422: {"description": "Validation error"}})
+    responses={401: {"description": "Could not validate credentials"}, 422: {"description": "Validation error"}})
-@_traced("api.get_logs")
 async def get_logs(
     limit: int = Query(50, ge=1, le=1000),
     offset: int = Query(0, ge=0, le=2147483647),
     search: Optional[str] = Query(None, max_length=512),
     start_time: Optional[str] = Query(None),
     end_time: Optional[str] = Query(None),
-    user: dict = Depends(require_viewer)
+    current_user: str = Depends(get_current_user)
 ) -> dict[str, Any]:
     def _norm(v: Optional[str]) -> Optional[str]:
         if v in (None, "null", "NULL", "undefined", ""):

View File

@@ -2,15 +2,13 @@ from typing import Any
 from fastapi import APIRouter, Depends
-from decnet.telemetry import traced as _traced
-from decnet.web.dependencies import require_viewer, repo
+from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import StatsResponse
 router = APIRouter()
 @router.get("/stats", response_model=StatsResponse, tags=["Observability"],
-    responses={401: {"description": "Could not validate credentials"}, 403: {"description": "Insufficient permissions"}, 422: {"description": "Validation error"}},)
-@_traced("api.get_stats")
-async def get_stats(user: dict = Depends(require_viewer)) -> dict[str, Any]:
+    responses={401: {"description": "Could not validate credentials"}, 422: {"description": "Validation error"}},)
+async def get_stats(current_user: str = Depends(get_current_user)) -> dict[str, Any]:
     return await repo.get_stats_summary()

View File

@@ -1,48 +1,19 @@
 import json
 import asyncio
+import logging
 from typing import AsyncGenerator, Optional
 from fastapi import APIRouter, Depends, Query, Request
 from fastapi.responses import StreamingResponse
 from decnet.env import DECNET_DEVELOPER
-from decnet.logging import get_logger
-from decnet.telemetry import traced as _traced, get_tracer as _get_tracer
-from decnet.web.dependencies import require_stream_viewer, repo
-log = get_logger("api")
+from decnet.web.dependencies import get_stream_user, repo
+log = logging.getLogger(__name__)
 router = APIRouter()
-def _build_trace_links(logs: list[dict]) -> list:
-    """Build OTEL span links from persisted trace_id/span_id in log rows.
-    Returns an empty list when tracing is disabled (no OTEL imports).
-    """
-    try:
-        from opentelemetry.trace import Link, SpanContext, TraceFlags
-    except ImportError:
-        return []
-    links: list[Link] = []
-    for entry in logs:
-        tid = entry.get("trace_id")
-        sid = entry.get("span_id")
-        if not tid or not sid or tid == "0":
-            continue
-        try:
-            ctx = SpanContext(
-                trace_id=int(tid, 16),
-                span_id=int(sid, 16),
-                is_remote=True,
-                trace_flags=TraceFlags(TraceFlags.SAMPLED),
-            )
-            links.append(Link(ctx))
-        except (ValueError, TypeError):
-            continue
-    return links
 @router.get("/stream", tags=["Observability"],
     responses={
         200: {
@@ -50,11 +21,9 @@ def _build_trace_links(logs: list[dict]) -> list:
             "description": "Real-time Server-Sent Events (SSE) stream"
         },
         401: {"description": "Could not validate credentials"},
-        403: {"description": "Insufficient permissions"},
         422: {"description": "Validation error"}
     },
 )
-@_traced("api.stream_events")
 async def stream_events(
     request: Request,
     last_event_id: int = Query(0, alias="lastEventId"),
@@ -62,33 +31,26 @@ async def stream_events(
     start_time: Optional[str] = None,
     end_time: Optional[str] = None,
     max_output: Optional[int] = Query(None, alias="maxOutput"),
-    user: dict = Depends(require_stream_viewer)
+    current_user: str = Depends(get_stream_user)
 ) -> StreamingResponse:
-    # Prefetch the initial snapshot before entering the streaming generator.
-    # With aiomysql (pure async TCP I/O), the first DB await inside the generator
-    # fires immediately after the ASGI layer sends the keepalive chunk — the HTTP
-    # write and the MySQL read compete for asyncio I/O callbacks and the MySQL
-    # callback can stall. Running these here (normal async context, no streaming)
-    # avoids that race entirely. aiosqlite is immune because it runs SQLite in a
-    # thread, decoupled from the event loop's I/O scheduler.
-    _start_id = last_event_id if last_event_id != 0 else await repo.get_max_log_id()
-    _initial_stats = await repo.get_stats_summary()
-    _initial_histogram = await repo.get_log_histogram(
-        search=search, start_time=start_time, end_time=end_time, interval_minutes=15,
-    )
     async def event_generator() -> AsyncGenerator[str, None]:
-        last_id = _start_id
+        last_id = last_event_id
         stats_interval_sec = 10
         loops_since_stats = 0
         emitted_chunks = 0
        try:
-            yield ": keepalive\n\n"  # flush headers immediately
-            # Emit pre-fetched initial snapshot — no DB calls in generator until the loop
-            yield f"event: message\ndata: {json.dumps({'type': 'stats', 'data': _initial_stats})}\n\n"
-            yield f"event: message\ndata: {json.dumps({'type': 'histogram', 'data': _initial_histogram})}\n\n"
+            if last_id == 0:
+                last_id = await repo.get_max_log_id()
+            # Emit initial snapshot immediately so the client never needs to poll /stats
+            stats = await repo.get_stats_summary()
+            yield f"event: message\ndata: {json.dumps({'type': 'stats', 'data': stats})}\n\n"
+            histogram = await repo.get_log_histogram(
+                search=search, start_time=start_time,
+                end_time=end_time, interval_minutes=15,
+            )
+            yield f"event: message\ndata: {json.dumps({'type': 'histogram', 'data': histogram})}\n\n"
             while True:
                 if DECNET_DEVELOPER and max_output is not None:
@@ -106,15 +68,7 @@ async def stream_events(
                 )
                 if new_logs:
                     last_id = max(entry["id"] for entry in new_logs)
-                    # Create a span linking back to the ingestion traces
-                    # stored in each log row, closing the pipeline gap.
-                    _links = _build_trace_links(new_logs)
-                    _tracer = _get_tracer("sse")
-                    with _tracer.start_as_current_span(
-                        "sse.emit_logs", links=_links,
-                        attributes={"log_count": len(new_logs)},
-                    ):
-                        yield f"event: message\ndata: {json.dumps({'type': 'logs', 'data': new_logs})}\n\n"
+                    yield f"event: message\ndata: {json.dumps({'type': 'logs', 'data': new_logs})}\n\n"
                     loops_since_stats = stats_interval_sec
                 if loops_since_stats >= stats_interval_sec:
@@ -136,11 +90,4 @@ async def stream_events(
             log.exception("SSE stream error for user %s", last_event_id)
             yield f"event: error\ndata: {json.dumps({'type': 'error', 'message': 'Stream interrupted'})}\n\n"
-    return StreamingResponse(
-        event_generator(),
-        media_type="text/event-stream",
-        headers={
-            "Cache-Control": "no-cache",
-            "X-Accel-Buffering": "no",
-        },
-    )
+    return StreamingResponse(event_generator(), media_type="text/event-stream")
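Every payload the stream endpoint above emits uses the same Server-Sent Events framing: a named event line, one JSON `data:` line, then a blank line. A standalone restatement of that framing (the helper name is illustrative; the route inlines the f-string instead):

```python
import json


def sse_frame(event_type: str, data: object) -> str:
    """Format one SSE frame the way the /stream generator yields them."""
    payload = json.dumps({"type": event_type, "data": data})
    # SSE frames are terminated by a blank line (double newline).
    return f"event: message\ndata: {payload}\n\n"


frame = sse_frame("stats", {"total": 3})
```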

View File

@@ -6,34 +6,18 @@ import Dashboard from './components/Dashboard';
 import DeckyFleet from './components/DeckyFleet';
 import LiveLogs from './components/LiveLogs';
 import Attackers from './components/Attackers';
-import AttackerDetail from './components/AttackerDetail';
 import Config from './components/Config';
 import Bounty from './components/Bounty';
-function isTokenValid(token: string): boolean {
-  try {
-    const payload = JSON.parse(atob(token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/')));
-    return typeof payload.exp === 'number' && payload.exp * 1000 > Date.now();
-  } catch {
-    return false;
-  }
-}
-function getValidToken(): string | null {
-  const stored = localStorage.getItem('token');
-  if (stored && isTokenValid(stored)) return stored;
-  if (stored) localStorage.removeItem('token');
-  return null;
-}
 function App() {
-  const [token, setToken] = useState<string | null>(getValidToken);
+  const [token, setToken] = useState<string | null>(localStorage.getItem('token'));
   const [searchQuery, setSearchQuery] = useState('');
   useEffect(() => {
-    const onAuthLogout = () => setToken(null);
-    window.addEventListener('auth:logout', onAuthLogout);
-    return () => window.removeEventListener('auth:logout', onAuthLogout);
+    const savedToken = localStorage.getItem('token');
+    if (savedToken) {
+      setToken(savedToken);
+    }
   }, []);
   const handleLogin = (newToken: string) => {
@@ -62,7 +46,6 @@ function App() {
         <Route path="/live-logs" element={<LiveLogs />} />
         <Route path="/bounty" element={<Bounty />} />
         <Route path="/attackers" element={<Attackers />} />
-        <Route path="/attackers/:id" element={<AttackerDetail />} />
         <Route path="/config" element={<Config />} />
         <Route path="*" element={<Navigate to="/" replace />} />
       </Routes>
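The removed `isTokenValid`/`getValidToken` helpers decode the JWT payload client-side and discard stored tokens whose `exp` claim has passed. The same expiry check transliterated to Python for illustration: a base64url decode of the payload segment only, with no signature verification, exactly like the removed TypeScript (the function name is ours, not the project's):

```python
import base64
import json
import time


def is_token_unexpired(token: str) -> bool:
    """Client-side expiry filter: decode the JWT payload and compare exp to now."""
    try:
        payload_b64 = token.split(".")[1]
        # base64url tokens strip '=' padding; restore it before decoding.
        payload_b64 += "=" * (-len(payload_b64) % 4)
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return isinstance(payload.get("exp"), (int, float)) and payload["exp"] > time.time()
    except Exception:
        # Malformed tokens are treated as invalid, mirroring the removed catch block.
        return False
```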

File diff suppressed because it is too large

View File

@@ -1,264 +1,17 @@
-import React, { useEffect, useState } from 'react';
-import { useSearchParams, useNavigate } from 'react-router-dom';
-import { Crosshair, Search, ChevronLeft, ChevronRight, Filter } from 'lucide-react';
-import api from '../utils/api';
+import React from 'react';
+import { Activity } from 'lucide-react';
 import './Dashboard.css';
interface AttackerEntry {
uuid: string;
ip: string;
first_seen: string;
last_seen: string;
event_count: number;
service_count: number;
decky_count: number;
services: string[];
deckies: string[];
traversal_path: string | null;
is_traversal: boolean;
bounty_count: number;
credential_count: number;
fingerprints: any[];
commands: any[];
updated_at: string;
}
function timeAgo(dateStr: string): string {
const diff = Date.now() - new Date(dateStr).getTime();
const mins = Math.floor(diff / 60000);
if (mins < 1) return 'just now';
if (mins < 60) return `${mins}m ago`;
const hrs = Math.floor(mins / 60);
if (hrs < 24) return `${hrs}h ago`;
const days = Math.floor(hrs / 24);
return `${days}d ago`;
}
const Attackers: React.FC = () => {
const navigate = useNavigate();
const [searchParams, setSearchParams] = useSearchParams();
const query = searchParams.get('q') || '';
const sortBy = searchParams.get('sort_by') || 'recent';
const serviceFilter = searchParams.get('service') || '';
const page = parseInt(searchParams.get('page') || '1');
const [attackers, setAttackers] = useState<AttackerEntry[]>([]);
const [total, setTotal] = useState(0);
const [loading, setLoading] = useState(true);
const [searchInput, setSearchInput] = useState(query);
const limit = 50;
const fetchAttackers = async () => {
setLoading(true);
try {
const offset = (page - 1) * limit;
let url = `/attackers?limit=${limit}&offset=${offset}&sort_by=${sortBy}`;
if (query) url += `&search=${encodeURIComponent(query)}`;
if (serviceFilter) url += `&service=${encodeURIComponent(serviceFilter)}`;
const res = await api.get(url);
setAttackers(res.data.data);
setTotal(res.data.total);
} catch (err) {
console.error('Failed to fetch attackers', err);
} finally {
setLoading(false);
}
};
useEffect(() => {
fetchAttackers();
}, [query, sortBy, serviceFilter, page]);
const _params = (overrides: Record<string, string> = {}) => {
const base: Record<string, string> = { q: query, sort_by: sortBy, service: serviceFilter, page: '1' };
return Object.fromEntries(Object.entries({ ...base, ...overrides }).filter(([, v]) => v !== ''));
};
const handleSearch = (e: React.FormEvent) => {
e.preventDefault();
setSearchParams(_params({ q: searchInput }));
};
const setPage = (p: number) => {
setSearchParams(_params({ page: p.toString() }));
};
const setSort = (s: string) => {
setSearchParams(_params({ sort_by: s }));
};
const clearService = () => {
setSearchParams(_params({ service: '' }));
};
const totalPages = Math.ceil(total / limit);
  return (
-    <div className="dashboard">
-      {/* Page Header */}
-      <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
-        <div style={{ display: 'flex', alignItems: 'center', gap: '16px' }}>
+    <div className="logs-section">
+      <div className="section-header">
+        <Activity size={20} />
+        <h2>ATTACKER PROFILES</h2>
<Crosshair size={32} className="violet-accent" />
<h1 style={{ fontSize: '1.5rem', letterSpacing: '4px' }}>ATTACKER PROFILES</h1>
</div>
<div style={{ display: 'flex', gap: '16px', alignItems: 'center' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '8px', border: '1px solid var(--border-color)', padding: '4px 12px' }}>
<Filter size={16} className="dim" />
<select
value={sortBy}
onChange={(e) => setSort(e.target.value)}
style={{ background: 'transparent', border: 'none', color: 'inherit', fontSize: '0.8rem', outline: 'none' }}
>
<option value="recent">RECENT</option>
<option value="active">MOST ACTIVE</option>
<option value="traversals">TRAVERSALS</option>
</select>
</div>
<form onSubmit={handleSearch} style={{ display: 'flex', alignItems: 'center', border: '1px solid var(--border-color)', padding: '4px 12px' }}>
<Search size={18} style={{ opacity: 0.5, marginRight: '8px' }} />
<input
type="text"
placeholder="Search by IP..."
value={searchInput}
onChange={(e) => setSearchInput(e.target.value)}
style={{ background: 'transparent', border: 'none', padding: '4px', fontSize: '0.8rem', width: '200px' }}
/>
</form>
</div>
      </div>
-      {/* Summary & Pagination */}
-      <div className="logs-section">
+      <div style={{ padding: '40px', textAlign: 'center', opacity: 0.5 }}>
+        <p>NO ACTIVE THREATS PROFILED YET.</p>
+        <p style={{ marginTop: '10px', fontSize: '0.8rem' }}>(Attackers view placeholder)</p>
<div className="section-header" style={{ justifyContent: 'space-between' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '12px' }}>
<span className="matrix-text" style={{ fontSize: '0.8rem' }}>{total} THREATS PROFILED</span>
{serviceFilter && (
<button
onClick={clearService}
style={{
display: 'inline-flex', alignItems: 'center', gap: '6px',
fontSize: '0.75rem', padding: '2px 10px', letterSpacing: '1px',
border: '1px solid var(--accent-color)', color: 'var(--accent-color)',
background: 'rgba(238, 130, 238, 0.1)', cursor: 'pointer',
}}
>
{serviceFilter.toUpperCase()} &times;
</button>
)}
</div>
<div style={{ display: 'flex', alignItems: 'center', gap: '16px' }}>
<span className="dim" style={{ fontSize: '0.8rem' }}>
Page {page} of {totalPages || 1}
</span>
<div style={{ display: 'flex', gap: '8px' }}>
<button
disabled={page <= 1}
onClick={() => setPage(page - 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page <= 1 ? 0.3 : 1 }}
>
<ChevronLeft size={16} />
</button>
<button
disabled={page >= totalPages}
onClick={() => setPage(page + 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page >= totalPages ? 0.3 : 1 }}
>
<ChevronRight size={16} />
</button>
</div>
</div>
</div>
{/* Card Grid */}
{loading ? (
<div style={{ textAlign: 'center', padding: '60px', opacity: 0.5, letterSpacing: '4px' }}>
SCANNING THREAT PROFILES...
</div>
) : attackers.length === 0 ? (
<div style={{ textAlign: 'center', padding: '60px', opacity: 0.5, letterSpacing: '4px' }}>
NO ACTIVE THREATS PROFILED YET
</div>
) : (
<div className="attacker-grid">
{attackers.map((a) => {
const lastCmd = a.commands.length > 0
? a.commands[a.commands.length - 1]
: null;
return (
<div
key={a.uuid}
className="attacker-card"
onClick={() => navigate(`/attackers/${a.uuid}`)}
>
{/* Header row */}
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: '12px' }}>
<span className="matrix-text" style={{ fontSize: '1.1rem', fontWeight: 'bold' }}>{a.ip}</span>
{a.is_traversal && (
<span className="traversal-badge">TRAVERSAL</span>
)}
</div>
{/* Timestamps */}
<div style={{ display: 'flex', gap: '16px', marginBottom: '8px', fontSize: '0.75rem' }}>
<span className="dim">First: {new Date(a.first_seen).toLocaleDateString()}</span>
<span className="dim">Last: {timeAgo(a.last_seen)}</span>
</div>
{/* Counts */}
<div style={{ display: 'flex', gap: '16px', marginBottom: '10px', fontSize: '0.8rem' }}>
<span>Events: <span className="matrix-text">{a.event_count}</span></span>
<span>Bounties: <span className="violet-accent">{a.bounty_count}</span></span>
<span>Creds: <span className="violet-accent">{a.credential_count}</span></span>
</div>
{/* Services */}
<div style={{ display: 'flex', flexWrap: 'wrap', gap: '4px', marginBottom: '8px' }}>
{a.services.map((svc) => (
<span
key={svc}
className="service-badge"
style={{ cursor: 'pointer' }}
onClick={(e) => { e.stopPropagation(); setSearchParams(_params({ service: svc })); }}
>
{svc.toUpperCase()}
</span>
))}
</div>
{/* Deckies / Traversal Path */}
{a.traversal_path ? (
<div style={{ fontSize: '0.75rem', marginBottom: '8px', opacity: 0.7 }}>
Path: {a.traversal_path}
</div>
) : a.deckies.length > 0 ? (
<div style={{ fontSize: '0.75rem', marginBottom: '8px', opacity: 0.7 }}>
Deckies: {a.deckies.join(', ')}
</div>
) : null}
{/* Commands & Fingerprints */}
<div style={{ display: 'flex', gap: '16px', fontSize: '0.75rem', marginBottom: '6px' }}>
<span>Cmds: <span className="matrix-text">{a.commands.length}</span></span>
<span>Fingerprints: <span className="matrix-text">{a.fingerprints.length}</span></span>
</div>
{/* Last command preview */}
{lastCmd && (
<div style={{ fontSize: '0.7rem', opacity: 0.6, overflow: 'hidden', textOverflow: 'ellipsis', whiteSpace: 'nowrap' }}>
Last cmd: <span className="matrix-text">{lastCmd.command}</span>
</div>
)}
</div>
);
})}
</div>
)}
      </div>
    </div>
  );
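The removed list view paginates with a 1-based `page` URL parameter, a fixed page size of 50, and `totalPages = Math.ceil(total / limit)`. A standalone restatement of that arithmetic (the function name is illustrative):

```python
import math


def page_window(page: int, limit: int, total: int) -> tuple[int, int]:
    """Return (offset, total_pages) for a 1-based page number, as the
    removed Attackers view computed them for its /attackers query."""
    offset = (page - 1) * limit
    total_pages = math.ceil(total / limit) if total else 0
    return offset, total_pages
```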

View File

@@ -14,118 +14,6 @@ interface BountyEntry {
   payload: any;
 }
const _FINGERPRINT_LABELS: Record<string, string> = {
fingerprint_type: 'TYPE',
ja3: 'JA3',
ja3s: 'JA3S',
ja4: 'JA4',
ja4s: 'JA4S',
ja4l: 'JA4L',
sni: 'SNI',
alpn: 'ALPN',
dst_port: 'PORT',
mechanisms: 'MECHANISM',
raw_ciphers: 'CIPHERS',
hash: 'HASH',
target_ip: 'TARGET',
target_port: 'PORT',
ssh_banner: 'BANNER',
kex_algorithms: 'KEX',
encryption_s2c: 'ENC (S→C)',
mac_s2c: 'MAC (S→C)',
compression_s2c: 'COMP (S→C)',
raw: 'RAW',
ttl: 'TTL',
window_size: 'WINDOW',
df_bit: 'DF',
mss: 'MSS',
window_scale: 'WSCALE',
sack_ok: 'SACK',
timestamp: 'TS',
options_order: 'OPTS ORDER',
};
const _TAG_STYLE: React.CSSProperties = {
fontSize: '0.65rem',
padding: '1px 6px',
borderRadius: '3px',
border: '1px solid rgba(238, 130, 238, 0.4)',
backgroundColor: 'rgba(238, 130, 238, 0.08)',
color: 'var(--accent-color)',
whiteSpace: 'nowrap',
flexShrink: 0,
};
const _HASH_STYLE: React.CSSProperties = {
fontSize: '0.75rem',
fontFamily: 'monospace',
opacity: 0.85,
wordBreak: 'break-all',
};
const FingerprintPayload: React.FC<{ payload: any }> = ({ payload }) => {
if (!payload || typeof payload !== 'object') {
return <span className="dim" style={{ fontSize: '0.8rem' }}>{JSON.stringify(payload)}</span>;
}
// For simple payloads like tls_resumption with just type + mechanism
const keys = Object.keys(payload);
const isSimple = keys.length <= 3;
if (isSimple) {
return (
<div style={{ display: 'flex', gap: '10px', alignItems: 'center', flexWrap: 'wrap' }}>
{keys.map((k) => {
const val = payload[k];
if (val === null || val === undefined) return null;
const label = _FINGERPRINT_LABELS[k] || k.toUpperCase();
return (
<span key={k} style={{ display: 'inline-flex', alignItems: 'center', gap: '5px' }}>
<span style={_TAG_STYLE}>{label}</span>
<span style={_HASH_STYLE}>{String(val)}</span>
</span>
);
})}
</div>
);
}
// Full fingerprint — show priority fields as labeled rows
const priorityKeys = ['fingerprint_type', 'ja3', 'ja3s', 'ja4', 'ja4s', 'ja4l', 'sni', 'alpn', 'dst_port', 'mechanisms', 'hash', 'target_ip', 'target_port', 'ssh_banner', 'ttl', 'window_size', 'mss', 'options_order'];
const shown = priorityKeys.filter((k) => payload[k] !== undefined && payload[k] !== null);
const rest = keys.filter((k) => !priorityKeys.includes(k) && payload[k] !== null && payload[k] !== undefined);
return (
<div style={{ display: 'flex', flexDirection: 'column', gap: '4px' }}>
{shown.map((k) => {
const label = _FINGERPRINT_LABELS[k] || k.toUpperCase();
const val = String(payload[k]);
return (
<div key={k} style={{ display: 'flex', alignItems: 'flex-start', gap: '6px' }}>
<span style={_TAG_STYLE}>{label}</span>
<span style={_HASH_STYLE}>{val}</span>
</div>
);
})}
{rest.length > 0 && (
<details style={{ marginTop: '2px' }}>
<summary className="dim" style={{ fontSize: '0.7rem', cursor: 'pointer', letterSpacing: '1px' }}>
+{rest.length} MORE FIELDS
</summary>
<div style={{ display: 'flex', flexDirection: 'column', gap: '3px', marginTop: '4px' }}>
{rest.map((k) => (
<div key={k} style={{ display: 'flex', alignItems: 'flex-start', gap: '6px' }}>
<span style={_TAG_STYLE}>{(_FINGERPRINT_LABELS[k] || k).toUpperCase()}</span>
<span style={{ ..._HASH_STYLE, fontSize: '0.7rem', opacity: 0.6 }}>{String(payload[k])}</span>
</div>
))}
</div>
</details>
)}
</div>
);
};
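The component above partitions payload keys into labeled priority rows and a collapsed "+N MORE FIELDS" section. That logic can be sketched standalone (a minimal sketch; `PRIORITY_KEYS` here is an illustrative subset of the component's full list, and `splitFingerprintFields` is a hypothetical helper, not part of the codebase):

```typescript
// Illustrative subset of the priority list used by FingerprintPayload.
const PRIORITY_KEYS = ['fingerprint_type', 'ja3', 'ja4', 'sni', 'hash'];

// Partition non-null payload keys: priority keys render as labeled rows,
// everything else collapses behind the "+N MORE FIELDS" disclosure.
function splitFingerprintFields(payload: Record<string, unknown>) {
  const present = Object.keys(payload).filter(
    (k) => payload[k] !== null && payload[k] !== undefined,
  );
  return {
    shown: PRIORITY_KEYS.filter((k) => present.includes(k)),
    rest: present.filter((k) => !PRIORITY_KEYS.includes(k)),
  };
}
```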
const Bounty: React.FC = () => {
const [searchParams, setSearchParams] = useSearchParams();
const query = searchParams.get('q') || '';
@@ -195,7 +83,6 @@ const Bounty: React.FC = () => {
>
<option value="">ALL TYPES</option>
<option value="credential">CREDENTIALS</option>
<option value="fingerprint">FINGERPRINTS</option>
<option value="payload">PAYLOADS</option>
</select>
</div>
@@ -280,8 +167,6 @@ const Bounty: React.FC = () => {
<span><span className="dim" style={{ marginRight: '4px' }}>user:</span>{b.payload.username}</span>
<span><span className="dim" style={{ marginRight: '4px' }}>pass:</span>{b.payload.password}</span>
</div>
) : b.bounty_type === 'fingerprint' ? (
<FingerprintPayload payload={b.payload} />
) : (
<span className="dim" style={{ fontSize: '0.8rem' }}>{JSON.stringify(b.payload)}</span>
)}

View File

@@ -1,282 +0,0 @@
.config-page {
display: flex;
flex-direction: column;
gap: 24px;
}
.config-tabs {
display: flex;
gap: 0;
border-bottom: 1px solid var(--border-color);
background-color: var(--secondary-color);
}
.config-tab {
padding: 12px 24px;
display: flex;
align-items: center;
gap: 8px;
font-size: 0.75rem;
letter-spacing: 1.5px;
border: none;
border-bottom: 2px solid transparent;
background: transparent;
color: var(--text-color);
opacity: 0.5;
cursor: pointer;
transition: all 0.3s ease;
}
.config-tab:hover {
opacity: 0.8;
background: rgba(0, 255, 65, 0.03);
box-shadow: none;
color: var(--text-color);
}
.config-tab.active {
opacity: 1;
border-bottom-color: var(--accent-color);
color: var(--text-color);
}
.config-panel {
background-color: var(--secondary-color);
border: 1px solid var(--border-color);
padding: 32px;
}
.config-field {
display: flex;
flex-direction: column;
gap: 10px;
margin-bottom: 24px;
}
.config-field:last-child {
margin-bottom: 0;
}
.config-label {
font-size: 0.7rem;
letter-spacing: 1px;
opacity: 0.6;
}
.config-value {
font-size: 1.1rem;
padding: 8px 0;
}
.config-input-row {
display: flex;
align-items: center;
gap: 12px;
}
.config-input-row input {
width: 120px;
}
.config-input-row input[type="text"] {
width: 160px;
}
.preset-buttons {
display: flex;
gap: 8px;
}
.preset-btn {
padding: 6px 14px;
font-size: 0.75rem;
opacity: 0.7;
}
.preset-btn.active {
opacity: 1;
border-color: var(--accent-color);
color: var(--accent-color);
}
.save-btn {
padding: 8px 20px;
font-weight: bold;
letter-spacing: 1px;
display: flex;
align-items: center;
gap: 6px;
}
.save-btn:disabled {
opacity: 0.3;
cursor: not-allowed;
}
/* User Management Table */
.users-table-container {
overflow-x: auto;
margin-bottom: 24px;
}
.users-table {
width: 100%;
border-collapse: collapse;
font-size: 0.8rem;
text-align: left;
}
.users-table th {
padding: 12px 24px;
border-bottom: 1px solid var(--border-color);
opacity: 0.5;
font-weight: normal;
font-size: 0.7rem;
letter-spacing: 1px;
}
.users-table td {
padding: 12px 24px;
border-bottom: 1px solid rgba(48, 54, 61, 0.5);
}
.users-table tr:hover {
background-color: rgba(0, 255, 65, 0.03);
}
.user-actions {
display: flex;
gap: 8px;
}
.action-btn {
padding: 4px 10px;
font-size: 0.7rem;
display: flex;
align-items: center;
gap: 4px;
}
.action-btn.danger {
border-color: #ff4141;
color: #ff4141;
}
.action-btn.danger:hover {
background: #ff4141;
color: var(--background-color);
box-shadow: 0 0 10px rgba(255, 65, 65, 0.5);
}
/* Add User Form */
.add-user-section {
border-top: 1px solid var(--border-color);
padding-top: 24px;
}
.add-user-form {
display: flex;
align-items: flex-end;
gap: 16px;
flex-wrap: wrap;
}
.add-user-form .form-group {
display: flex;
flex-direction: column;
gap: 6px;
}
.add-user-form label {
font-size: 0.65rem;
letter-spacing: 1px;
opacity: 0.6;
}
.add-user-form input {
width: 180px;
}
.add-user-form select {
background: #0d1117;
border: 1px solid var(--border-color);
color: var(--text-color);
padding: 8px 12px;
font-family: inherit;
cursor: pointer;
}
.add-user-form select:focus {
outline: none;
border-color: var(--text-color);
box-shadow: var(--matrix-green-glow);
}
.role-select {
background: #0d1117;
border: 1px solid var(--border-color);
color: var(--text-color);
padding: 4px 8px;
font-family: inherit;
font-size: 0.75rem;
cursor: pointer;
}
.role-badge {
font-size: 0.7rem;
padding: 2px 8px;
border: 1px solid;
display: inline-block;
}
.role-badge.admin {
border-color: var(--accent-color);
color: var(--accent-color);
}
.role-badge.viewer {
border-color: var(--border-color);
color: var(--text-color);
opacity: 0.6;
}
.must-change-badge {
font-size: 0.65rem;
color: #ffaa00;
opacity: 0.8;
}
.config-success {
color: var(--text-color);
font-size: 0.75rem;
padding: 6px 12px;
border: 1px solid var(--text-color);
background: rgba(0, 255, 65, 0.1);
display: inline-block;
}
.config-error {
color: #ff4141;
font-size: 0.75rem;
padding: 6px 12px;
border: 1px solid #ff4141;
background: rgba(255, 65, 65, 0.1);
display: inline-block;
}
.confirm-dialog {
display: flex;
align-items: center;
gap: 8px;
font-size: 0.75rem;
}
.confirm-dialog span {
color: #ff4141;
}
.interval-hint {
font-size: 0.65rem;
opacity: 0.4;
letter-spacing: 0.5px;
}

View File

@@ -1,516 +1,18 @@
-import React, { useEffect, useState } from 'react';
-import api from '../utils/api';
-import { Settings, Users, Sliders, Trash2, UserPlus, Key, Save, Shield, AlertTriangle } from 'lucide-react';
+import React from 'react';
+import { Settings } from 'lucide-react';
import './Dashboard.css';
-import './Config.css';
interface UserEntry {
uuid: string;
username: string;
role: string;
must_change_password: boolean;
}
interface ConfigData {
role: string;
deployment_limit: number;
global_mutation_interval: string;
users?: UserEntry[];
developer_mode?: boolean;
}
const Config: React.FC = () => {
const [config, setConfig] = useState<ConfigData | null>(null);
const [loading, setLoading] = useState(true);
const [activeTab, setActiveTab] = useState<'limits' | 'users' | 'globals'>('limits');
// Deployment limit state
const [limitInput, setLimitInput] = useState('');
const [limitSaving, setLimitSaving] = useState(false);
const [limitMsg, setLimitMsg] = useState<{ type: 'success' | 'error'; text: string } | null>(null);
// Global mutation interval state
const [intervalInput, setIntervalInput] = useState('');
const [intervalSaving, setIntervalSaving] = useState(false);
const [intervalMsg, setIntervalMsg] = useState<{ type: 'success' | 'error'; text: string } | null>(null);
// Add user form state
const [newUsername, setNewUsername] = useState('');
const [newPassword, setNewPassword] = useState('');
const [newRole, setNewRole] = useState<'admin' | 'viewer'>('viewer');
const [addingUser, setAddingUser] = useState(false);
const [userMsg, setUserMsg] = useState<{ type: 'success' | 'error'; text: string } | null>(null);
// Confirm delete state
const [confirmDelete, setConfirmDelete] = useState<string | null>(null);
// Reset password state
const [resetTarget, setResetTarget] = useState<string | null>(null);
const [resetPassword, setResetPassword] = useState('');
// Reinit state
const [confirmReinit, setConfirmReinit] = useState(false);
const [reiniting, setReiniting] = useState(false);
const [reinitMsg, setReinitMsg] = useState<{ type: 'success' | 'error'; text: string } | null>(null);
const isAdmin = config?.role === 'admin';
const fetchConfig = async () => {
try {
const res = await api.get('/config');
setConfig(res.data);
setLimitInput(String(res.data.deployment_limit));
setIntervalInput(res.data.global_mutation_interval);
} catch (err) {
console.error('Failed to fetch config', err);
} finally {
setLoading(false);
}
};
useEffect(() => {
fetchConfig();
}, []);
// If server didn't send users, force tab away from users
useEffect(() => {
if (config && !config.users && activeTab === 'users') {
setActiveTab('limits');
}
}, [config, activeTab]);
const handleSaveLimit = async () => {
const val = parseInt(limitInput);
if (isNaN(val) || val < 1 || val > 500) {
setLimitMsg({ type: 'error', text: 'VALUE MUST BE 1-500' });
return;
}
setLimitSaving(true);
setLimitMsg(null);
try {
await api.put('/config/deployment-limit', { deployment_limit: val });
setLimitMsg({ type: 'success', text: 'DEPLOYMENT LIMIT UPDATED' });
fetchConfig();
} catch (err: any) {
setLimitMsg({ type: 'error', text: err.response?.data?.detail || 'UPDATE FAILED' });
} finally {
setLimitSaving(false);
}
};
const handleSaveInterval = async () => {
if (!/^[1-9]\d*[mdMyY]$/.test(intervalInput)) {
setIntervalMsg({ type: 'error', text: 'INVALID FORMAT (e.g. 30m, 1d, 6M)' });
return;
}
setIntervalSaving(true);
setIntervalMsg(null);
try {
await api.put('/config/global-mutation-interval', { global_mutation_interval: intervalInput });
setIntervalMsg({ type: 'success', text: 'MUTATION INTERVAL UPDATED' });
fetchConfig();
} catch (err: any) {
setIntervalMsg({ type: 'error', text: err.response?.data?.detail || 'UPDATE FAILED' });
} finally {
setIntervalSaving(false);
}
};
const handleAddUser = async (e: React.FormEvent) => {
e.preventDefault();
if (!newUsername.trim() || !newPassword.trim()) return;
setAddingUser(true);
setUserMsg(null);
try {
await api.post('/config/users', {
username: newUsername.trim(),
password: newPassword,
role: newRole,
});
setNewUsername('');
setNewPassword('');
setNewRole('viewer');
setUserMsg({ type: 'success', text: 'USER CREATED' });
fetchConfig();
} catch (err: any) {
setUserMsg({ type: 'error', text: err.response?.data?.detail || 'CREATE FAILED' });
} finally {
setAddingUser(false);
}
};
const handleDeleteUser = async (uuid: string) => {
try {
await api.delete(`/config/users/${uuid}`);
setConfirmDelete(null);
fetchConfig();
} catch (err: any) {
alert(err.response?.data?.detail || 'Delete failed');
}
};
const handleRoleChange = async (uuid: string, role: string) => {
try {
await api.put(`/config/users/${uuid}/role`, { role });
fetchConfig();
} catch (err: any) {
alert(err.response?.data?.detail || 'Role update failed');
}
};
const handleResetPassword = async (uuid: string) => {
if (!resetPassword.trim() || resetPassword.length < 8) {
alert('Password must be at least 8 characters');
return;
}
try {
await api.put(`/config/users/${uuid}/reset-password`, { new_password: resetPassword });
setResetTarget(null);
setResetPassword('');
fetchConfig();
} catch (err: any) {
alert(err.response?.data?.detail || 'Password reset failed');
}
};
const handleReinit = async () => {
setReiniting(true);
setReinitMsg(null);
try {
const res = await api.delete('/config/reinit');
const d = res.data.deleted;
setReinitMsg({ type: 'success', text: `PURGED: ${d.logs} logs, ${d.bounties} bounties, ${d.attackers} attacker profiles` });
setConfirmReinit(false);
} catch (err: any) {
setReinitMsg({ type: 'error', text: err.response?.data?.detail || 'REINIT FAILED' });
} finally {
setReiniting(false);
}
};
if (loading) {
return (
<div className="logs-section">
<div className="loader">LOADING CONFIGURATION...</div>
</div>
);
}
if (!config) {
return (
<div className="logs-section">
<div style={{ padding: '40px', textAlign: 'center', opacity: 0.5 }}>
<p>FAILED TO LOAD CONFIGURATION</p>
</div>
</div>
);
}
const tabs: { key: string; label: string; icon: React.ReactNode }[] = [
{ key: 'limits', label: 'DEPLOYMENT LIMITS', icon: <Sliders size={14} /> },
...(config.users
? [{ key: 'users', label: 'USER MANAGEMENT', icon: <Users size={14} /> }]
: []),
{ key: 'globals', label: 'GLOBAL VALUES', icon: <Settings size={14} /> },
];
return (
-  <div className="config-page">
-    <div className="logs-section">
-      <div className="section-header">
-        <Shield size={20} />
-        <h2>SYSTEM CONFIGURATION</h2>
-      </div>
-    </div>
-    <div className="config-tabs">
-      {tabs.map((tab) => (
-        <button
-          key={tab.key}
-          className={`config-tab ${activeTab === tab.key ? 'active' : ''}`}
-          onClick={() => setActiveTab(tab.key as any)}
-        >
-          {tab.icon}
-          {tab.label}
-        </button>
-      ))}
-    </div>
+  <div className="logs-section">
+    <div className="section-header">
+      <Settings size={20} />
+      <h2>SYSTEM CONFIGURATION</h2>
+    </div>
+    <div style={{ padding: '40px', textAlign: 'center', opacity: 0.5 }}>
+      <p>CONFIGURATION READ-ONLY MODE ACTIVE.</p>
+      <p style={{ marginTop: '10px', fontSize: '0.8rem' }}>(Config view placeholder)</p>
+    </div>
{/* DEPLOYMENT LIMITS TAB */}
{activeTab === 'limits' && (
<div className="config-panel">
<div className="config-field">
<span className="config-label">MAXIMUM DECKIES PER DEPLOYMENT</span>
{isAdmin ? (
<>
<div className="config-input-row">
<input
type="number"
min={1}
max={500}
value={limitInput}
onChange={(e) => setLimitInput(e.target.value)}
/>
<div className="preset-buttons">
{[10, 50, 100, 200].map((v) => (
<button
key={v}
className={`preset-btn ${limitInput === String(v) ? 'active' : ''}`}
onClick={() => setLimitInput(String(v))}
>
{v}
</button>
))}
</div>
<button
className="save-btn"
onClick={handleSaveLimit}
disabled={limitSaving}
>
<Save size={14} />
{limitSaving ? 'SAVING...' : 'SAVE'}
</button>
</div>
{limitMsg && (
<span className={limitMsg.type === 'success' ? 'config-success' : 'config-error'}>
{limitMsg.text}
</span>
)}
</>
) : (
<span className="config-value">{config.deployment_limit}</span>
)}
</div>
</div>
)}
{/* USER MANAGEMENT TAB (only if server sent users) */}
{activeTab === 'users' && config.users && (
<div className="config-panel">
<div className="users-table-container">
<table className="users-table">
<thead>
<tr>
<th>USERNAME</th>
<th>ROLE</th>
<th>STATUS</th>
<th>ACTIONS</th>
</tr>
</thead>
<tbody>
{config.users.map((user) => (
<tr key={user.uuid}>
<td>{user.username}</td>
<td>
<span className={`role-badge ${user.role}`}>{user.role.toUpperCase()}</span>
</td>
<td>
{user.must_change_password && (
<span className="must-change-badge">MUST CHANGE PASSWORD</span>
)}
</td>
<td>
<div className="user-actions">
{/* Role change dropdown */}
<select
className="role-select"
value={user.role}
onChange={(e) => handleRoleChange(user.uuid, e.target.value)}
>
<option value="admin">admin</option>
<option value="viewer">viewer</option>
</select>
{/* Reset password */}
{resetTarget === user.uuid ? (
<div className="confirm-dialog">
<input
type="password"
placeholder="New password"
value={resetPassword}
onChange={(e) => setResetPassword(e.target.value)}
style={{ width: '140px' }}
/>
<button className="action-btn" onClick={() => handleResetPassword(user.uuid)}>
SET
</button>
<button className="action-btn" onClick={() => { setResetTarget(null); setResetPassword(''); }}>
CANCEL
</button>
</div>
) : (
<button
className="action-btn"
onClick={() => setResetTarget(user.uuid)}
>
<Key size={12} />
RESET
</button>
)}
{/* Delete */}
{confirmDelete === user.uuid ? (
<div className="confirm-dialog">
<span>CONFIRM?</span>
<button className="action-btn danger" onClick={() => handleDeleteUser(user.uuid)}>
YES
</button>
<button className="action-btn" onClick={() => setConfirmDelete(null)}>
NO
</button>
</div>
) : (
<button
className="action-btn danger"
onClick={() => setConfirmDelete(user.uuid)}
>
<Trash2 size={12} />
DELETE
</button>
)}
</div>
</td>
</tr>
))}
</tbody>
</table>
</div>
<div className="add-user-section">
<form className="add-user-form" onSubmit={handleAddUser}>
<div className="form-group">
<label>USERNAME</label>
<input
type="text"
value={newUsername}
onChange={(e) => setNewUsername(e.target.value)}
required
minLength={1}
maxLength={64}
/>
</div>
<div className="form-group">
<label>PASSWORD</label>
<input
type="password"
value={newPassword}
onChange={(e) => setNewPassword(e.target.value)}
required
minLength={8}
maxLength={72}
/>
</div>
<div className="form-group">
<label>ROLE</label>
<select
value={newRole}
onChange={(e) => setNewRole(e.target.value as 'admin' | 'viewer')}
>
<option value="viewer">viewer</option>
<option value="admin">admin</option>
</select>
</div>
<button type="submit" className="save-btn" disabled={addingUser}>
<UserPlus size={14} />
{addingUser ? 'CREATING...' : 'ADD USER'}
</button>
{userMsg && (
<span className={userMsg.type === 'success' ? 'config-success' : 'config-error'}>
{userMsg.text}
</span>
)}
</form>
</div>
</div>
)}
{/* GLOBAL VALUES TAB */}
{activeTab === 'globals' && (
<div className="config-panel">
<div className="config-field">
<span className="config-label">GLOBAL MUTATION INTERVAL</span>
{isAdmin ? (
<>
<div className="config-input-row">
<input
type="text"
value={intervalInput}
onChange={(e) => setIntervalInput(e.target.value)}
placeholder="30m"
/>
<button
className="save-btn"
onClick={handleSaveInterval}
disabled={intervalSaving}
>
<Save size={14} />
{intervalSaving ? 'SAVING...' : 'SAVE'}
</button>
</div>
<span className="interval-hint">
FORMAT: &lt;number&gt;&lt;unit&gt; m=minutes, d=days, M=months, y=years (e.g. 30m, 7d, 1M)
</span>
{intervalMsg && (
<span className={intervalMsg.type === 'success' ? 'config-success' : 'config-error'}>
{intervalMsg.text}
</span>
)}
</>
) : (
<span className="config-value">{config.global_mutation_interval}</span>
)}
</div>
</div>
)}
{/* DANGER ZONE — developer mode only, server-gated, shown on globals tab */}
{activeTab === 'globals' && config.developer_mode && (
<div className="config-panel" style={{ borderColor: '#ff4141' }}>
<div className="config-field" style={{ marginBottom: 0 }}>
<span className="config-label" style={{ color: '#ff4141' }}>
<AlertTriangle size={12} style={{ display: 'inline', verticalAlign: 'middle', marginRight: '6px' }} />
DANGER ZONE DEVELOPER MODE
</span>
<p style={{ fontSize: '0.75rem', opacity: 0.5, margin: '4px 0 12px' }}>
Purge all logs, bounty vault entries, and attacker profiles. This action is irreversible.
</p>
{!confirmReinit ? (
<button
className="action-btn danger"
onClick={() => setConfirmReinit(true)}
style={{ padding: '8px 16px', fontSize: '0.8rem' }}
>
<Trash2 size={14} />
PURGE ALL DATA
</button>
) : (
<div className="confirm-dialog">
<span>THIS WILL DELETE ALL COLLECTED DATA. ARE YOU SURE?</span>
<button
className="action-btn danger"
onClick={handleReinit}
disabled={reiniting}
style={{ padding: '6px 16px' }}
>
{reiniting ? 'PURGING...' : 'YES, PURGE'}
</button>
<button
className="action-btn"
onClick={() => setConfirmReinit(false)}
style={{ padding: '6px 16px' }}
>
CANCEL
</button>
</div>
)}
{reinitMsg && (
<span className={reinitMsg.type === 'success' ? 'config-success' : 'config-error'} style={{ marginTop: '8px' }}>
{reinitMsg.text}
</span>
)}
</div>
</div>
)}
</div>
);
};
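The interval format enforced by `handleSaveInterval` in the removed Config code is a positive integer followed by a single unit character. A standalone sketch of that check (`isValidInterval` is a hypothetical helper reusing the same regex, not a function from the codebase):

```typescript
// Same pattern as handleSaveInterval: positive integer (no leading zero)
// followed by one unit character: m=minutes, d=days, M=months, y/Y=years.
const INTERVAL_RE = /^[1-9]\d*[mdMyY]$/;

function isValidInterval(value: string): boolean {
  return INTERVAL_RE.test(value);
}
```

For example, `'30m'` and `'7d'` pass, while `'0m'` (leading zero) and `'12h'` (unsupported unit) are rejected.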

View File

@@ -127,96 +127,3 @@
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
/* Attacker Profiles */
.attacker-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(340px, 1fr));
gap: 16px;
padding: 16px;
}
.attacker-card {
background: var(--secondary-color);
border: 1px solid var(--border-color);
padding: 16px;
cursor: pointer;
transition: transform 0.15s ease, box-shadow 0.15s ease, border-color 0.15s ease;
}
.attacker-card:hover {
transform: translateY(-2px);
border-color: var(--text-color);
box-shadow: var(--matrix-green-glow);
}
.traversal-badge {
font-size: 0.65rem;
padding: 2px 8px;
border: 1px solid var(--accent-color);
background: rgba(238, 130, 238, 0.1);
color: var(--accent-color);
letter-spacing: 2px;
}
.service-badge {
font-size: 0.7rem;
padding: 2px 8px;
border: 1px solid var(--text-color);
background: rgba(0, 255, 65, 0.05);
color: var(--text-color);
}
.back-button {
display: inline-flex;
align-items: center;
gap: 8px;
padding: 8px 16px;
border: 1px solid var(--border-color);
background: transparent;
color: var(--text-color);
cursor: pointer;
font-size: 0.8rem;
letter-spacing: 2px;
transition: border-color 0.15s ease, box-shadow 0.15s ease;
}
.back-button:hover {
border-color: var(--text-color);
box-shadow: var(--matrix-green-glow);
}
/* Fingerprint cards */
.fp-card {
border: 1px solid var(--border-color);
background: rgba(0, 0, 0, 0.2);
transition: border-color 0.15s ease;
}
.fp-card:hover {
border-color: var(--accent-color);
}
.fp-card-header {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 16px;
border-bottom: 1px solid var(--border-color);
}
.fp-card-icon {
color: var(--accent-color);
display: flex;
align-items: center;
}
.fp-card-label {
font-size: 0.7rem;
letter-spacing: 2px;
opacity: 0.7;
}
.fp-card-body {
padding: 12px 16px;
}

View File

@@ -1,4 +1,4 @@
-import React, { useEffect, useState, useRef } from 'react';
+import React, { useEffect, useState } from 'react';
import './Dashboard.css';
import { Shield, Users, Activity, Clock } from 'lucide-react';
@@ -29,52 +29,37 @@ const Dashboard: React.FC<DashboardProps> = ({ searchQuery }) => {
const [stats, setStats] = useState<Stats | null>(null);
const [logs, setLogs] = useState<LogEntry[]>([]);
const [loading, setLoading] = useState(true);
-const eventSourceRef = useRef<EventSource | null>(null);
-const reconnectTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
useEffect(() => {
-  const connect = () => {
-    if (eventSourceRef.current) {
-      eventSourceRef.current.close();
-    }
-    const token = localStorage.getItem('token');
-    const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
-    let url = `${baseUrl}/stream?token=${token}`;
-    if (searchQuery) {
-      url += `&search=${encodeURIComponent(searchQuery)}`;
-    }
-    const es = new EventSource(url);
-    eventSourceRef.current = es;
-    es.onmessage = (event) => {
-      try {
-        const payload = JSON.parse(event.data);
-        if (payload.type === 'logs') {
-          setLogs(prev => [...payload.data, ...prev].slice(0, 100));
-        } else if (payload.type === 'stats') {
-          setStats(payload.data);
-          setLoading(false);
-          window.dispatchEvent(new CustomEvent('decnet:stats', { detail: payload.data }));
-        }
-      } catch (err) {
-        console.error('Failed to parse SSE payload', err);
-      }
-    };
-    es.onerror = () => {
-      es.close();
-      eventSourceRef.current = null;
-      reconnectTimerRef.current = setTimeout(connect, 3000);
-    };
-  };
-  connect();
+  const token = localStorage.getItem('token');
+  const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
+  let url = `${baseUrl}/stream?token=${token}`;
+  if (searchQuery) {
+    url += `&search=${encodeURIComponent(searchQuery)}`;
+  }
+  const eventSource = new EventSource(url);
+  eventSource.onmessage = (event) => {
+    try {
+      const payload = JSON.parse(event.data);
+      if (payload.type === 'logs') {
+        setLogs(prev => [...payload.data, ...prev].slice(0, 100));
+      } else if (payload.type === 'stats') {
+        setStats(payload.data);
+        setLoading(false);
+      }
+    } catch (err) {
+      console.error('Failed to parse SSE payload', err);
+    }
+  };
+  eventSource.onerror = (err) => {
+    console.error('SSE connection error, attempting to reconnect...', err);
+  };
  return () => {
-    if (reconnectTimerRef.current) clearTimeout(reconnectTimerRef.current);
-    if (eventSourceRef.current) eventSourceRef.current.close();
+    eventSource.close();
  };
}, [searchQuery]);
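The removed Dashboard code reconnected on a fixed 3-second timer after an SSE error. A common refinement is capped exponential backoff; the delay policy itself is a pure function (an illustrative sketch, not part of the codebase):

```typescript
// Delay before reconnect attempt N: the base doubles each attempt, capped.
// The removed code effectively used reconnectDelay(0) every time (3 s).
function reconnectDelay(attempt: number, baseMs = 3000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```

A reconnect loop would pass an attempt counter that resets to zero once a message is received.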

View File

@@ -22,7 +22,6 @@ const DeckyFleet: React.FC = () => {
const [showDeploy, setShowDeploy] = useState(false);
const [iniContent, setIniContent] = useState('');
const [deploying, setDeploying] = useState(false);
const [isAdmin, setIsAdmin] = useState(false);
const fetchDeckies = async () => {
try {
@@ -35,15 +34,6 @@ const DeckyFleet: React.FC = () => {
}
};
const fetchRole = async () => {
try {
const res = await api.get('/config');
setIsAdmin(res.data.role === 'admin');
} catch {
setIsAdmin(false);
}
};
const handleMutate = async (name: string) => {
setMutating(name);
try {
@@ -104,7 +94,6 @@ const DeckyFleet: React.FC = () => {
useEffect(() => {
fetchDeckies();
fetchRole();
const _interval = setInterval(fetchDeckies, 10000); // Fleet state updates less frequently than logs
return () => clearInterval(_interval);
}, []);
@@ -118,14 +107,12 @@ const DeckyFleet: React.FC = () => {
<Server size={20} />
<h2 style={{ margin: 0 }}>DECOY FLEET ASSET INVENTORY</h2>
</div>
-{isAdmin && (
-  <button
-    onClick={() => setShowDeploy(!showDeploy)}
-    style={{ display: 'flex', alignItems: 'center', gap: '8px', border: '1px solid var(--accent-color)', color: 'var(--accent-color)' }}
-  >
-    + DEPLOY DECKIES
-  </button>
-)}
+<button
+  onClick={() => setShowDeploy(!showDeploy)}
+  style={{ display: 'flex', alignItems: 'center', gap: '8px', border: '1px solid var(--accent-color)', color: 'var(--accent-color)' }}
+>
+  + DEPLOY DECKIES
+</button>
</div>
{showDeploy && (
@@ -199,32 +186,24 @@ const DeckyFleet: React.FC = () => {
<div style={{ display: 'flex', alignItems: 'center', gap: '8px', fontSize: '0.85rem', marginTop: '8px' }}>
<Clock size={14} className="dim" />
<span className="dim">MUTATION:</span>
-{isAdmin ? (
-  <span
-    style={{ color: 'var(--accent-color)', cursor: 'pointer', textDecoration: 'underline' }}
-    onClick={() => handleIntervalChange(decky.name, decky.mutate_interval)}
-  >
-    {decky.mutate_interval ? `EVERY ${decky.mutate_interval}m` : 'DISABLED'}
-  </span>
-) : (
-  <span style={{ color: 'var(--accent-color)' }}>
-    {decky.mutate_interval ? `EVERY ${decky.mutate_interval}m` : 'DISABLED'}
-  </span>
-)}
-{isAdmin && (
-  <button
-    onClick={() => handleMutate(decky.name)}
-    disabled={!!mutating}
-    style={{
-      background: 'transparent', border: '1px solid var(--accent-color)',
-      color: 'var(--accent-color)', padding: '2px 8px', fontSize: '0.7rem',
-      cursor: mutating ? 'not-allowed' : 'pointer', display: 'flex', alignItems: 'center', gap: '4px', marginLeft: 'auto',
-      opacity: mutating ? 0.5 : 1
-    }}
-  >
-    <RefreshCw size={10} className={mutating === decky.name ? "spin" : ""} /> {mutating === decky.name ? 'MUTATING...' : 'FORCE'}
-  </button>
-)}
+<span
+  style={{ color: 'var(--accent-color)', cursor: 'pointer', textDecoration: 'underline' }}
+  onClick={() => handleIntervalChange(decky.name, decky.mutate_interval)}
+>
+  {decky.mutate_interval ? `EVERY ${decky.mutate_interval}m` : 'DISABLED'}
+</span>
+<button
+  onClick={() => handleMutate(decky.name)}
+  disabled={!!mutating}
+  style={{
+    background: 'transparent', border: '1px solid var(--accent-color)',
+    color: 'var(--accent-color)', padding: '2px 8px', fontSize: '0.7rem',
+    cursor: mutating ? 'not-allowed' : 'pointer', display: 'flex', alignItems: 'center', gap: '4px', marginLeft: 'auto',
+    opacity: mutating ? 0.5 : 1
+  }}
+>
+  <RefreshCw size={10} className={mutating === decky.name ? "spin" : ""} /> {mutating === decky.name ? 'MUTATING...' : 'FORCE'}
+</button>
</div>
{decky.last_mutated > 0 && (
<div style={{ fontSize: '0.7rem', color: 'var(--dim-color)', fontStyle: 'italic', marginTop: '4px' }}>

View File

@@ -1,6 +1,7 @@
import React, { useState, useEffect } from 'react';
import { NavLink } from 'react-router-dom';
import { Menu, X, Search, Activity, LayoutDashboard, Terminal, Settings, LogOut, Server, Archive } from 'lucide-react';
import api from '../utils/api';
import './Layout.css';
interface LayoutProps {
@@ -20,12 +21,17 @@ const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => {
};
useEffect(() => {
-  const onStats = (e: Event) => {
-    const stats = (e as CustomEvent).detail;
-    setSystemActive(stats.deployed_deckies > 0);
-  };
-  window.addEventListener('decnet:stats', onStats);
-  return () => window.removeEventListener('decnet:stats', onStats);
+  const fetchStatus = async () => {
+    try {
+      const res = await api.get('/stats');
+      setSystemActive(res.data.deployed_deckies > 0);
+    } catch (err) {
+      console.error('Failed to fetch system status', err);
+    }
+  };
+  fetchStatus();
+  const interval = setInterval(fetchStatus, 10000);
+  return () => clearInterval(interval);
}, []);
return (

View File

@@ -12,15 +12,4 @@ api.interceptors.request.use((config) => {
return config;
});
api.interceptors.response.use(
(response) => response,
(error) => {
if (error.response?.status === 401) {
localStorage.removeItem('token');
window.dispatchEvent(new Event('auth:logout'));
}
return Promise.reject(error);
}
);
export default api;


@@ -4,12 +4,4 @@ import react from '@vitejs/plugin-react'
// https://vite.dev/config/
export default defineConfig({
plugins: [react()],
-  server: {
-    proxy: {
-      '/api': {
-        target: 'http://127.0.0.1:8000',
-        changeOrigin: true,
-      },
-    },
-  },
})


@@ -45,7 +45,7 @@
## Core / Hardening
-- [~] **Attacker fingerprinting** — HTTP User-Agent, VNC client version stored as `fingerprint` bounties. JA3/JA3S in progress (sniffer container). HASSH, JA4+, TCP stack, JARM planned (see Attacker Intelligence section).
+- [ ] **Attacker fingerprinting** — Capture TLS JA3/JA4 hashes, TCP window sizes, User-Agent strings, and SSH client banners.
- [ ] **Canary tokens** — Embed fake AWS keys and honeydocs into decky filesystems.
- [ ] **Tarpit mode** — Slow down attackers by drip-feeding bytes or delaying responses.
- [x] **Dynamic decky mutation** — Rotate exposed services or OS fingerprints over time.
@@ -66,7 +66,7 @@
- [x] **Web dashboard** — Real-time React SPA + FastAPI backend for logs and fleet status.
- [x] **Decky Inventory** — Dedicated "Decoy Fleet" page showing all deployed assets.
- [ ] **Pre-built Kibana/Grafana dashboards** — Ship JSON exports for ELK/Grafana.
-- [~] **CLI live feed** — `decnet watch` — WON'T IMPLEMENT: redundant with `tail -f` on the existing log file; adds bloat without meaningful value.
+- [ ] **CLI live feed** — `decnet watch` command for a unified, colored terminal stream.
- [x] **Traversal graph export** — Export attacker movement as JSON (via CLI).
## Deployment & Infrastructure
@@ -84,55 +84,6 @@
- [ ] **Realistic web apps** — Fake WordPress, Grafana, and phpMyAdmin templates.
- [ ] **OT/ICS profiles** — Expanded Modbus, DNP3, and BACnet support.
## Attacker Intelligence Collection
*Goal: Build the richest possible attacker profile from passive observation across all 26 services.*
### TLS/SSL Fingerprinting (via sniffer container)
- [x] **JA3/JA3S** — TLS ClientHello/ServerHello fingerprint hashes
- [x] **JA4+ family** — JA4, JA4S, JA4H, JA4L (latency/geo estimation via RTT)
- [x] **JARM** — Active server fingerprint; identifies C2 framework from TLS server behavior
- [~] **CYU** — Citrix-specific TLS fingerprint: WILL NOT implement pre-v1; no representative Citrix traffic is available to build or validate it against.
- [x] **TLS session resumption behavior** — Identifies tooling by how it handles session tickets
- [x] **Certificate details** — CN, SANs, issuer, validity period, self-signed flag (attacker-run servers)
### Timing & Behavioral
- [ ] **Inter-packet arrival times** — OS TCP stack fingerprint + beaconing interval detection
- [ ] **TTL values** — Rough OS / hop-distance inference
- [ ] **TCP window size & scaling** — p0f-style OS fingerprinting
- [ ] **Retransmission patterns** — Identify lossy paths / throttled connections
- [ ] **Beacon jitter variance** — Attribute tooling: Cobalt Strike vs. Sliver vs. Havoc have distinct profiles
- [ ] **C2 check-in cadence** — Detect beaconing vs. interactive sessions
- [ ] **Data exfil timing** — Behavioral sequencing relative to recon phase
### Protocol Fingerprinting
- [ ] **TCP/IP stack** — ISN patterns, DF bit, ToS/DSCP, IP ID sequence (random/incremental/zero)
- [ ] **HASSH / HASSHServer** — SSH KEX algo, cipher, MAC order → tool fingerprint
- [ ] **HTTP/2 fingerprint** — GREASE values, settings frame order, header pseudo-field ordering
- [ ] **QUIC fingerprint** — Connection ID length, transport parameters order
- [ ] **DNS behavior** — Query patterns, recursion flags, EDNS0 options, resolver fingerprint
- [ ] **HTTP header ordering** — Tool-specific capitalization and ordering quirks
### Network Topology Leakage
- [ ] **X-Forwarded-For mismatches** — Detect VPN/proxy slip vs. actual source IP
- [ ] **ICMP error messages** — Internal IP leakage from misconfigured attacker infra
- [ ] **IPv6 link-local leakage** — IPv6 addrs leaked even over IPv4 VPN (common opsec fail)
- [ ] **mDNS/LLMNR leakage** — Attacker hostname/device info from misconfigured systems
### Geolocation & Infrastructure
- [ ] **ASN lookup** — Source IP autonomous system number and org name
- [ ] **BGP prefix / RPKI validity** — Route origin legitimacy
- [ ] **PTR records** — rDNS for attacker IPs (catches infra with forgotten reverse DNS)
- [ ] **Latency triangulation** — JA4L RTT estimates for rough geolocation
### Service-Level Behavioral Profiling
- [ ] **Commands executed** — Full command log per session (SSH, Telnet, FTP, Redis, DB services)
- [ ] **Services actively interacted with** — Distinguish port scans from live exploitation attempts
- [ ] **Tooling attribution** — Byte-sequence signatures from known C2 frameworks in handshakes
- [ ] **Credential reuse patterns** — Same username/password tried across multiple deckies/services
- [ ] **Payload signatures** — Hash and classify uploaded files, shellcode, exploit payloads
---
## Developer Experience
- [x] **API Fuzzing** — Property-based testing for all web endpoints.


@@ -1,20 +0,0 @@
# DECNET OpenTelemetry development stack.
#
# Start: docker compose -f development/docker-compose.otel.yml up -d
# UI: http://localhost:16686 (Jaeger)
# Stop: docker compose -f development/docker-compose.otel.yml down
#
# Then run DECNET with tracing enabled:
# DECNET_DEVELOPER_TRACING=true decnet web
services:
jaeger:
image: jaegertracing/all-in-one:latest
container_name: decnet-jaeger
restart: unless-stopped
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
- "16686:16686" # Jaeger UI
environment:
COLLECTOR_OTLP_ENABLED: "true"


@@ -1,153 +0,0 @@
# DECNET Technical Architecture: Deep Dive
This document provides a low-level technical decomposition of the DECNET (Deception Network) framework. It covers the internal orchestration logic, networking internals, reactive data pipelines, and the persistent intelligence schema.
---
## 1. System Topology & Micro-Services
DECNET is architected as a set of decoupled "engines" that interact via a persistent shared repository (SQLite/MySQL) and the Docker socket.
### Component Connectivity Graph
```mermaid
graph TD
subgraph "Infrastructure Layer"
DK[Docker Engine]
MV[MACVLAN / IPvlan Driver]
end
subgraph "Identity Layer (Deckies)"
B1[Base Container 01]
S1a[Service: SSH]
S1b[Service: HTTP]
B1 --- S1a
B1 --- S1b
end
subgraph "Telemetry Layer"
SNF[Sniffer Worker]
COL[Log Collector]
end
subgraph "Processing Layer"
ING[Log Ingester]
PROF[Attacker Profiler]
end
subgraph "Persistence Layer"
DB[(SQLModel Repository)]
ST[decnet-state.json]
end
DK --- MV
MV --- B1
S1a -- "stdout/stderr" --> COL
S1b -- "stdout/stderr" --> COL
SNF -- "PCAP Analysis" --> COL
COL -- "JSON Tail" --> ING
ING -- "Bounty Extraction" --> DB
ING -- "Log Commit" --> DB
DB -- "Log Cursor" --> PROF
PROF -- "Correlation Engine" --> DB
PROF -- "Behavior Rollup" --> DB
ING -- "Events" --> WS[Web Dashboard / SSE]
```
---
## 2. Core Orchestration: The "Decky" Lifecycle
A **Decky** is a logical entity represented by a shared network namespace.
### The Deployment Flow (`decnet deploy`)
1. **Configuration Parsing**: `DecnetConfig` (via `ini_loader.py`) validates the archetypes and service counts.
2. **IP Allocation**: `ips_to_range()` calculates the minimal CIDR covering all requested IPs to prevent exhaustion of the host's subnet.
3. **Network Setup**:
- Calls `docker network create -d macvlan --parent eth0`.
- Creates a host-side bridge (`decnet_macvlan0`) to fix the Linux bridge isolation issue (hairpin fix).
4. **Logging Injection**: Every service container has `decnet_logging.py` injected into its build context to ensure uniform RFC 5424 syslog output.
5. **Compose Generation**: `write_compose()` creates a dynamic `docker-compose.yml` where:
- Service containers use `network_mode: "service:<base_container_name>"`.
- Base containers use `sysctls` derived from `os_fingerprint.py`.
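The minimal-CIDR computation in step 2 can be sketched with the standard `ipaddress` module. This is an illustrative re-implementation of the idea only; the real `ips_to_range()` lives in the engine and its signature is not shown here:

```python
import ipaddress

def ips_to_range(ips):
    """Return the smallest CIDR network covering all requested IPs (sketch)."""
    addrs = sorted(int(ipaddress.ip_address(ip)) for ip in ips)
    lo, hi = addrs[0], addrs[-1]
    # Widen the prefix until one network contains both extremes.
    prefix = 32
    while prefix > 0:
        net = ipaddress.ip_network(
            f"{ipaddress.ip_address(lo)}/{prefix}", strict=False)
        if int(net.broadcast_address) >= hi:
            return net
        prefix -= 1
    return ipaddress.ip_network("0.0.0.0/0")

print(ips_to_range(["10.0.0.5", "10.0.0.20"]))  # 10.0.0.0/27
```

Allocating the minimal covering range rather than the whole host subnet is what prevents the IP exhaustion the step describes.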
### Teardown & State
Runtime state is persisted in `decnet-state.json`. Upon `teardown`, DECNET:
1. Runs `docker compose down`.
2. Deletes the host-side macvlan interface and routes.
3. Removes the Docker network.
4. Clears the CLI state.
---
## 3. Networking Internals: Passive & Active Fidelity
### OS Fingerprinting (TCP/IP Spoofing)
DECNET tunes the networking behavior of each Decky within its own namespace. This is handled by the `os_fingerprint.py` module, which sets specific `sysctls` in the base container:
- `net.ipv4.tcp_window_scaling`: Enables/disables based on OS profile.
- `net.ipv4.tcp_timestamps`: Mimics specific OS tendencies (e.g., Windows vs. Linux).
- `net.ipv4.tcp_syncookies`: Prevents OS detection via SYN-flood response patterns.
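A minimal sketch of the profile-to-`sysctls` mapping in the spirit of `os_fingerprint.py`. The concrete profile values below are illustrative examples, not DECNET's shipped profiles:

```python
# Illustrative OS-profile -> sysctl mapping; values are examples only.
OS_PROFILES = {
    "linux": {
        "net.ipv4.tcp_window_scaling": "1",
        "net.ipv4.tcp_timestamps": "1",
    },
    "windows": {
        "net.ipv4.tcp_window_scaling": "1",
        "net.ipv4.tcp_timestamps": "0",  # mimic Windows-like timestamp behavior
    },
}

def compose_sysctls(profile: str) -> dict:
    """Return the sysctls fragment for a base container's compose entry."""
    return {"sysctls": dict(OS_PROFILES[profile])}

fragment = compose_sysctls("windows")
```

The fragment is merged into the base container's compose definition, so every service sharing that network namespace inherits the spoofed TCP/IP behavior.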
### The Packet Flow
1. **Ingress**: Packet hits physical NIC -> MACVLAN Bridge -> Target Decky Namespace.
2. **Telemetry**: The `Sniffer` container attaches to the same MACVLAN bridge in promiscuous mode. It uses scapy-like logic (via `decnet.sniffer`) to extract:
- **JA3/JA4**: TLS ClientHello fingerprints.
- **HASSH**: SSH Key Exchange fingerprints.
- **JARM**: (Triggered actively) TLS server fingerprints.
---
## 4. Persistent Intelligence: Database Schema
DECNET uses an asynchronous SQLModel-based repository. The schema is optimized for both high-speed ingestion and complex behavioral correlation.
### Entity Relationship Model
| Table | Purpose | Key Fields |
| :--- | :--- | :--- |
| **logs** | Raw event stream | `id`, `timestamp`, `decky`, `service`, `event_type`, `attacker_ip`, `fields` |
| **bounty** | Harvested artifacts | `id`, `bounty_type`, `payload` (JSON), `attacker_ip` |
| **attackers** | Aggregated profiles | `uuid`, `ip`, `is_traversal`, `traversal_path`, `fingerprints` (JSON), `commands` (JSON) |
| **attacker_behavior** | Behavioral profile | `attacker_uuid`, `os_guess`, `behavior_class`, `tool_guesses` (JSON), `timing_stats` (JSON) |
### JSON Logic
To maintain portability across SQLite/MySQL, DECNET uses the `JSON_EXTRACT` function for filtering logs by internal fields (e.g., searching for a specific HTTP User-Agent inside the `fields` column).
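The portability trick can be demonstrated directly against SQLite's built-in JSON1 extension; the table and field names follow the schema above, but the query itself is an illustrative sketch, not DECNET's actual repository code:

```python
import json
import sqlite3

# JSON_EXTRACT exists in both SQLite (JSON1) and MySQL, so the same query
# string filters the JSON `fields` column on either backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, fields TEXT)")
conn.execute("INSERT INTO logs (fields) VALUES (?)",
             (json.dumps({"user_agent": "curl/8.5.0", "src_ip": "203.0.113.9"}),))
conn.execute("INSERT INTO logs (fields) VALUES (?)",
             (json.dumps({"user_agent": "sqlmap/1.8"}),))

rows = conn.execute(
    "SELECT id FROM logs WHERE JSON_EXTRACT(fields, '$.user_agent') LIKE ?",
    ("sqlmap%",),
).fetchall()
print(rows)  # only the sqlmap row matches
```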
---
## 5. Reactive Processing: The Internal Pipeline
### Log Ingestion & Bounty Extraction
1. **Tailer**: `log_ingestion_worker` tails the JSON log stream.
2. **JSON Parsing**: Every line is validated against the RFC 5424 mapping.
3. **Extraction Logic**:
- If `event_type == "credential"`, a row is added to the `bounty` table.
- If `ja3` field exists, a `fingerprint` bounty is created.
4. **Notification**: Logs are dispatched to active WebSocket/SSE clients for real-time visualization.
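The extraction rules in step 3 can be sketched as follows; the field names (`event_type`, `ja3`, `attacker_ip`) come from this document, but the helper function itself is hypothetical:

```python
# Hypothetical sketch of the bounty-extraction rules described above.
def extract_bounties(record: dict) -> list[dict]:
    bounties = []
    if record.get("event_type") == "credential":
        bounties.append({
            "bounty_type": "credential",
            "payload": {"username": record.get("username"),
                        "password": record.get("password")},
            "attacker_ip": record.get("attacker_ip"),
        })
    if "ja3" in record:
        bounties.append({
            "bounty_type": "fingerprint",
            "payload": {"ja3": record["ja3"]},
            "attacker_ip": record.get("attacker_ip"),
        })
    return bounties

found = extract_bounties({"event_type": "credential", "username": "root",
                          "password": "toor", "ja3": "771,4865-4866",
                          "attacker_ip": "198.51.100.7"})
```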
### Correlation & Traversal Logic
The `CorrelationEngine` processes logs in batches:
- **IP Grouping**: Logs are indexed by `attacker_ip`.
- **Hop Extraction**: The engine identifies distinct `deckies` touched by the same IP.
- **Path Calculation**: A chronological string (`decky-A -> decky-B`) is built to visualize the attack progression.
- **Attacker Profile Upsert**: The `Attacker` table is updated with the new counts, path, and consolidated bounty history.
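The hop-extraction and path-building steps can be sketched on plain dicts (the real `CorrelationEngine` operates on database rows in batches; this shows only the shape of the logic):

```python
# Sketch: chronological, deduplicated hop path for one attacker IP.
def traversal_path(logs: list[dict], attacker_ip: str) -> str:
    hops = []
    for entry in sorted((l for l in logs if l["attacker_ip"] == attacker_ip),
                        key=lambda l: l["timestamp"]):
        if not hops or hops[-1] != entry["decky"]:
            hops.append(entry["decky"])  # record only decky-to-decky hops
    return " -> ".join(hops)

logs = [
    {"attacker_ip": "203.0.113.9", "decky": "decky-A", "timestamp": 1},
    {"attacker_ip": "203.0.113.9", "decky": "decky-A", "timestamp": 2},
    {"attacker_ip": "203.0.113.9", "decky": "decky-B", "timestamp": 3},
]
print(traversal_path(logs, "203.0.113.9"))  # decky-A -> decky-B
```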
---
## 6. Service Plugin Architecture
Adding a new honeypot service is zero-configuration. The `decnet/services/registry.py` uses `pkgutil.iter_modules` to auto-discover any file in the `services/` directory.
### `BaseService` Interface
Every service must implement:
- `name`: Unique identifier (e.g., "ssh").
- `ports`: Targeted ports (e.g., `22/tcp`).
- `dockerfile_context()`: Path to the template directory.
- `compose_service(name, base_name)`: Returns the Docker Compose fragment.
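A hypothetical plugin implementing this interface might look like the following; the real `BaseService` class in `decnet/services` may differ in detail, and the compose fragment here only illustrates the shared-namespace pattern described in section 2:

```python
# Sketch of a service plugin; names and return shapes are illustrative.
class BaseService:
    name: str
    ports: list[str]

    def dockerfile_context(self) -> str:
        raise NotImplementedError

    def compose_service(self, name: str, base_name: str) -> dict:
        raise NotImplementedError

class SSHService(BaseService):
    name = "ssh"
    ports = ["22/tcp"]

    def dockerfile_context(self) -> str:
        return "templates/ssh"

    def compose_service(self, name: str, base_name: str) -> dict:
        # Service containers join the base container's network namespace,
        # so all of a Decky's services share one IP and MAC.
        return {name: {"build": self.dockerfile_context(),
                       "network_mode": f"service:{base_name}"}}

frag = SSHService().compose_service("decky01-ssh", "decky01-base")
```

Because the registry auto-discovers modules via `pkgutil.iter_modules`, dropping a file with such a class into `services/` is all that registration requires.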
### Templates
Templates (found in `/templates/`) contain the Dockerfile and entrypoint. The `deployer` automatically syncs `decnet_logging.py` into these contexts during build time to ensure logs are streamed correctly to the host.


@@ -1,219 +0,0 @@
# Distributed Tracing
OpenTelemetry (OTEL) distributed tracing across all DECNET services. Gated by the `DECNET_DEVELOPER_TRACING` environment variable (off by default). When disabled, zero overhead: no OTEL imports occur, `@traced` returns the original unwrapped function, and no middleware is installed.
## Quick Start
```bash
# 1. Start Jaeger (OTLP receiver on :4317, UI on :16686)
docker compose -f development/docker-compose.otel.yml up -d
# 2. Run DECNET with tracing enabled
DECNET_DEVELOPER_TRACING=true decnet web
# 3. Open Jaeger UI — service name is "decnet"
open http://localhost:16686
```
| Variable | Default | Purpose |
|----------|---------|---------|
| `DECNET_DEVELOPER_TRACING` | `false` | Enable/disable all tracing |
| `DECNET_OTEL_ENDPOINT` | `http://localhost:4317` | OTLP gRPC exporter target |
## Architecture
The core module is `decnet/telemetry.py`. All tracing flows through it.
| Export | Purpose |
|--------|---------|
| `setup_tracing(app)` | Init TracerProvider, instrument FastAPI, enable log-trace correlation |
| `shutdown_tracing()` | Flush and shut down the TracerProvider |
| `get_tracer(component)` | Return an OTEL Tracer or `_NoOpTracer` when disabled |
| `@traced(name)` | Decorator wrapping sync/async functions in spans (no-op when disabled) |
| `wrap_repository(repo)` | Dynamic `__getattr__` proxy adding `db.*` spans to every async method |
| `inject_context(record)` | Embed W3C trace context into a JSON record under `_trace` |
| `extract_context(record)` | Recover trace context from `_trace` and remove it from the record |
| `start_span_with_context(tracer, name, ctx)` | Start a span as child of an extracted context |
**TracerProvider config**: Resource(`service.name=decnet`, `service.version=0.2.0`), `BatchSpanProcessor`, OTLP gRPC exporter.
**When disabled**: `_NoOpTracer` and `_NoOpSpan` stubs are returned. No OTEL SDK packages are imported. The `@traced` decorator returns the original function object at decoration time.
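The decoration-time gate can be sketched as below. This is simplified: the real decorator also handles async functions, and the exact config lookup is an assumption beyond the `DECNET_DEVELOPER_TRACING` variable named above:

```python
import functools
import os

def traced(name: str):
    """Sketch: with tracing disabled, hand back the original function."""
    def decorator(fn):
        if os.environ.get("DECNET_DEVELOPER_TRACING", "false").lower() != "true":
            return fn  # decoration-time no-op: no wrapper, no OTEL import
        from opentelemetry import trace  # imported only when tracing is on

        tracer = trace.get_tracer("decnet")

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            with tracer.start_as_current_span(name):
                return fn(*args, **kwargs)
        return wrapper
    return decorator

@traced("demo.add")
def add(a: int, b: int) -> int:
    return a + b
```

With the variable unset, `add` is the untouched original function object, which is what makes the disabled path genuinely zero-overhead.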
## Pipeline Trace Propagation
The DECNET data pipeline is decoupled through JSON files and the database, which normally breaks trace continuity. Four mechanisms bridge the gaps:
1. **Collector → JSON**: `inject_context()` embeds W3C `traceparent`/`tracestate` into each JSON log record under a `_trace` key.
2. **JSON → Ingester**: `extract_context()` recovers the parent context. The ingester creates `ingester.process_record` as a child span, preserving the collector→ingester parent-child relationship.
3. **Ingester → DB**: The ingester persists the current span's `trace_id` and `span_id` as columns on the `logs` table before calling `repo.add_log()`.
4. **DB → SSE**: The SSE endpoint reads `trace_id`/`span_id` from log rows and creates OTEL **span links** (FOLLOWS_FROM) on `sse.emit_logs`, connecting the read path back to the original ingestion traces.
**Log-trace correlation**: `_TraceContextFilter` (installed by `enable_trace_context()`) injects `otel_trace_id` and `otel_span_id` into Python `LogRecord` objects, bridging structured logs with trace context.
## Span Reference
### API Endpoints (23 spans)
| Span | Endpoint |
|------|----------|
| `api.login` | `POST /auth/login` |
| `api.change_password` | `POST /auth/change-password` |
| `api.get_logs` | `GET /logs` |
| `api.get_logs_histogram` | `GET /logs/histogram` |
| `api.get_bounties` | `GET /bounty` |
| `api.get_attackers` | `GET /attackers` |
| `api.get_attacker_detail` | `GET /attackers/{uuid}` |
| `api.get_attacker_commands` | `GET /attackers/{uuid}/commands` |
| `api.get_stats` | `GET /stats` |
| `api.get_deckies` | `GET /fleet/deckies` |
| `api.deploy_deckies` | `POST /fleet/deploy` |
| `api.mutate_decky` | `POST /fleet/mutate/{decky_id}` |
| `api.update_mutate_interval` | `POST /fleet/mutate-interval/{decky_id}` |
| `api.get_config` | `GET /config` |
| `api.update_deployment_limit` | `PUT /config/deployment-limit` |
| `api.update_global_mutation_interval` | `PUT /config/global-mutation-interval` |
| `api.create_user` | `POST /config/users` |
| `api.delete_user` | `DELETE /config/users/{uuid}` |
| `api.update_user_role` | `PUT /config/users/{uuid}/role` |
| `api.reset_user_password` | `PUT /config/users/{uuid}/password` |
| `api.reinit` | `POST /config/reinit` |
| `api.get_health` | `GET /health` |
| `api.stream_events` | `GET /stream` |
### DB Layer (dynamic)
Every async method on `BaseRepository` is automatically wrapped by `TracedRepository` as `db.<method_name>` (e.g. `db.add_log`, `db.get_attackers`, `db.upsert_attacker`).
### Collector
| Span | Type |
|------|------|
| `collector.stream_container` | `@traced` |
| `collector.event` | inline |
### Ingester
| Span | Type |
|------|------|
| `ingester.process_record` | inline (with parent context) |
| `ingester.extract_bounty` | `@traced` |
### Profiler
| Span | Type |
|------|------|
| `profiler.incremental_update` | `@traced` |
| `profiler.update_profiles` | `@traced` |
| `profiler.process_ip` | inline |
| `profiler.timing_stats` | `@traced` |
| `profiler.classify_behavior` | `@traced` |
| `profiler.detect_tools_from_headers` | `@traced` |
| `profiler.phase_sequence` | `@traced` |
| `profiler.sniffer_rollup` | `@traced` |
| `profiler.build_behavior_record` | `@traced` |
| `profiler.behavior_summary` | inline |
### Sniffer
| Span | Type |
|------|------|
| `sniffer.worker` | `@traced` |
| `sniffer.sniff_loop` | `@traced` |
| `sniffer.tcp_syn_fingerprint` | inline |
| `sniffer.tls_client_hello` | inline |
| `sniffer.tls_server_hello` | inline |
| `sniffer.tls_certificate` | inline |
| `sniffer.parse_client_hello` | `@traced` |
| `sniffer.parse_server_hello` | `@traced` |
| `sniffer.parse_certificate` | `@traced` |
| `sniffer.ja3` | `@traced` |
| `sniffer.ja3s` | `@traced` |
| `sniffer.ja4` | `@traced` |
| `sniffer.ja4s` | `@traced` |
| `sniffer.session_resumption_info` | `@traced` |
| `sniffer.p0f_guess_os` | `@traced` |
| `sniffer.write_event` | `@traced` |
### Prober
| Span | Type |
|------|------|
| `prober.worker` | `@traced` |
| `prober.discover_attackers` | `@traced` |
| `prober.probe_cycle` | `@traced` |
| `prober.jarm_phase` | `@traced` |
| `prober.hassh_phase` | `@traced` |
| `prober.tcpfp_phase` | `@traced` |
| `prober.jarm_hash` | `@traced` |
| `prober.jarm_send_probe` | `@traced` |
| `prober.hassh_server` | `@traced` |
| `prober.hassh_ssh_connect` | `@traced` |
| `prober.tcp_fingerprint` | `@traced` |
| `prober.tcpfp_send_syn` | `@traced` |
### Engine
| Span | Type |
|------|------|
| `engine.deploy` | `@traced` |
| `engine.teardown` | `@traced` |
| `engine.compose_with_retry` | `@traced` |
### Mutator
| Span | Type |
|------|------|
| `mutator.mutate_decky` | `@traced` |
| `mutator.mutate_all` | `@traced` |
| `mutator.watch_loop` | `@traced` |
### Correlation
| Span | Type |
|------|------|
| `correlation.ingest_file` | `@traced` |
| `correlation.ingest_file.summary` | inline |
| `correlation.traversals` | `@traced` |
| `correlation.report_json` | `@traced` |
| `correlation.traversal_syslog_lines` | `@traced` |
### Logging
| Span | Type |
|------|------|
| `logging.init_file_handler` | `@traced` |
| `logging.probe_log_target` | `@traced` |
### SSE
| Span | Type |
|------|------|
| `sse.emit_logs` | inline (with span links to ingestion traces) |
## Adding New Traces
```python
from decnet.telemetry import traced as _traced, get_tracer as _get_tracer
# Decorator (preferred for entire functions)
@_traced("component.operation")
async def my_function():
...
# Inline (for sub-sections within a function)
with _get_tracer("component").start_as_current_span("component.sub_op") as span:
span.set_attribute("key", "value")
...
```
Naming convention: `component.operation` (e.g. `prober.jarm_hash`, `profiler.timing_stats`).
## Troubleshooting
| Symptom | Check |
|---------|-------|
| No traces in Jaeger | `DECNET_DEVELOPER_TRACING=true`? Jaeger running on port 4317? |
| `ImportError` on OTEL packages | Run `pip install -e ".[dev]"` (OTEL is in optional deps) |
| Partial traces (ingester orphaned) | Verify `_trace` key present in JSON log file records |
| SSE spans have no links | Confirm `trace_id`/`span_id` columns exist in `logs` table |
| Performance concern | BatchSpanProcessor adds ~1ms per span; zero overhead when disabled |


@@ -1,63 +0,0 @@
# DECNET Collector
The `decnet/collector` module is responsible for the background acquisition, normalization, and filtering of logs generated by the honeypot fleet. It acts as the bridge between the transient Docker container logs and the persistent analytical database.
## Architecture
The Collector runs as a host-side worker (typically managed by the CLI or a daemon). It employs a hybrid asynchronous and multi-threaded model to handle log streaming from a dynamic number of containers without blocking the main event loop.
### Log Pipeline Flow
1. **Discovery**: Scans `decnet-state.json` to identify active Decky service containers.
2. **Streaming**: Spawns a dedicated thread for every active container to tail its `stdout` via the Docker SDK.
3. **Normalization**: Parses the raw RFC 5424 Syslog lines into structured JSON.
4. **Filtering**: Applies a rate-limiter to deduplicate high-frequency connection events.
5. **Storage**: Appends raw lines to `.log` and filtered JSON to `.json` for database ingestion.
---
## Core Components
### `worker.py`
#### `log_collector_worker(log_file: str)`
The main asynchronous entry point.
- **Initial Scan**: Identifies all running containers that match the DECNET service naming convention.
- **Event Loop**: Uses the Docker `events` API to listen for `container:start` events, allowing it to automatically pick up new Deckies that are deployed after the collector has started.
- **Task Management**: Manages a dictionary of active streaming tasks, ensuring no container is streamed more than once and cleaning up completed tasks.
---
## Log Normalization (RFC 5424)
DECNET services emit logs using a standardized RFC 5424 format with structured data. The `parse_rfc5424` function is the primary tool for extracting this information.
- **Structured Data**: Extracts parameters from the `decnet@55555` SD-ELEMENT.
- **Field Mapping**: Identifies the `attacker_ip` by scanning common source IP fields (`src_ip`, `client_ip`, etc.).
- **Consistency**: Formats timestamps into a human-readable `%Y-%m-%d %H:%M:%S` format for the analytical stream.
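The structured-data extraction can be sketched with a simple regex pass; this is illustrative only (the real `parse_rfc5424` also handles the priority, version, and timestamp fields, escaping, and more), and the `remote_addr` fallback key is an assumption beyond the `src_ip`/`client_ip` examples above:

```python
import re

LINE = ('<134>1 2026-04-13T11:50:02Z decky01 ssh 77 - '
        '[decnet@55555 event_type="login" src_ip="203.0.113.9"] Login attempt')

def parse_rfc5424(line: str) -> dict:
    """Sketch: pull key="value" pairs out of the decnet@55555 SD-ELEMENT."""
    sd = dict(re.findall(r'(\w+)="([^"]*)"', line))
    record = {"fields": sd}
    # Map the first matching source-IP field to attacker_ip.
    for key in ("src_ip", "client_ip", "remote_addr"):
        if key in sd:
            record["attacker_ip"] = sd[key]
            break
    return record

parsed = parse_rfc5424(LINE)
```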
---
## Ingestion Rate Limiter
To prevent the local SQLite database from being overwhelmed during credential-stuffing attacks or heavy port scanning, the Collector implements a window-based rate limiter for "lifecycle" events.
- **Scope**: By default, it limits: `connect`, `disconnect`, `connection`, `accept`, and `close`.
- **Logic**: It groups events by `(attacker_ip, decky, service, event_type)`. If the same event occurs within the window, it is written to the raw `.log` file (for forensics) but **discarded** for the `.json` stream (ingestion).
- **Configuration**:
- `DECNET_COLLECTOR_RL_WINDOW_SEC`: The deduplication window size (default: 1.0s).
- `DECNET_COLLECTOR_RL_EVENT_TYPES`: Comma-separated list of event types to limit.
---
## Resilience & Operational Stability
### Inode Tracking (`_reopen_if_needed`)
Log files can be rotated by `logrotate` or manually deleted. The Collector tracks the **inode** of the log handles. If the file on disk changes (indicating rotation or deletion), the collector transparently closes and reopens the handle, ensuring no logs are lost and preventing "stale handle" errors.
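The inode check behind `_reopen_if_needed` can be sketched like this (illustrative; the real helper manages multiple handles and modes):

```python
import os

def reopen_if_needed(handle, path: str):
    """Reopen `handle` if `path` now points at a different inode (rotation)."""
    try:
        if os.stat(path).st_ino == os.fstat(handle.fileno()).st_ino:
            return handle  # same file on disk, keep writing
    except FileNotFoundError:
        pass  # file was deleted out from under us; fall through and reopen
    handle.close()
    return open(path, "a")
```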
### Docker SDK Integration
The Collector uses `asyncio.to_thread` to run the blocking Docker SDK `logs(stream=True)` calls. This ensures that the high-latency network calls to the Docker daemon do not starve the asynchronous event loop responsible for monitoring container starts.
### Container Identification
The Collector uses two layers of verification to ensure it only collects logs from DECNET honeypots:
1. **Name Matching**: Checks if the container name matches the `{decky}-{service}` pattern.
2. **State Verification**: Cross-references container names with the current `decnet-state.json`.


@@ -1,61 +0,0 @@
# DECNET Engine (Orchestrator)
The `decnet/engine` module is the central nervous system of DECNET. It acts as the primary orchestrator, responsible for bridging high-level configuration (user-defined deckies and archetypes) with the underlying infrastructure (Docker containers, MACVLAN/IPvlan networking, and host-level configurations).
## Role in the Ecosystem
While the CLI manages user interaction and the Service Registry manages available honeypots, the **Engine** is what actually manifests these concepts into running containers on the network. It handles:
- **Network Virtualization**: Dynamically setting up MACVLAN or IPvlan L2 interfaces.
- **Container Lifecycle**: Orchestrating `docker compose` for building and running services.
- **State Persistence**: Tracking active deployments to ensure clean teardowns.
- **Unified Logging Injection**: Ensuring all honeypots share the same logging utilities.
---
## Core Components
### `deployer.py`
This is the primary implementation file for the engine logic.
#### `deploy(config: DecnetConfig, ...)`
The entry point for a deployment. It executes the following sequence:
1. **Network Setup**: Identifies the IP range required for the requested deckies and initializes the Docker MACVLAN/IPvlan network.
2. **Host Bridge**: Configures host-level routing (via `setup_host_macvlan` or `setup_host_ipvlan`) so the host can communicate with the decoys.
3. **Logging Synchronization**: Copies the `decnet_logging.py` utility into every service's build context to ensure consistent log formatting.
4. **Compose Generation**: Uses the `decnet.composer` to generate a `decnet-compose.yml` file.
5. **State Management**: Saves the current configuration to `decnet-state.json`.
6. **Orchestrated Build/Up**: Executes `docker compose up --build` with automatic retries for transient Docker daemon failures.
#### `teardown(decky_id: str | None = None)`
Handles the cleanup of DECNET resources.
- **Targeted Teardown**: If a `decky_id` is provided, it stops and removes only those specific containers.
- **Full Teardown**: If no ID is provided, it:
- Stops and removes all DECNET containers.
- Tears down host-level virtual interfaces.
- Removes the Docker MACVLAN/IPvlan network.
- Clears the internal `decnet-state.json`.
#### `status()`
Provides a real-time snapshot of the deployment.
- Queries the Docker SDK for the current status of all containers associated with the active deployment.
- Displays a `rich` table showing Decky names, IPs, Hostnames, and the health status of individual services.
---
## Internal Logic & Helpers
### Infrastructure Orchestration
The Engine relies heavily on sub-processes to interface with `docker compose`, as it provides a robust abstraction for managing complex container groups (Deckies).
- **`_compose_with_retry`**: Docker operations (especially `pull` and `build`) can fail due to network timeouts or registry issues. This helper implements exponential backoff to ensure high reliability during deployment.
- **`_compose`**: A direct wrapper for `docker compose` commands used during teardown where retries are less critical.
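The retry policy can be sketched as below. The terminal-error strings come from the "Error Handling & Resilience" description; the delays, attempt count, and the injectable `run` parameter are illustrative choices for this sketch, not the real `_compose_with_retry` signature:

```python
import subprocess
import time

TERMINAL_ERRORS = ("manifest unknown", "repository does not exist")

def compose_with_retry(args: list[str], attempts: int = 4,
                       run=subprocess.run) -> None:
    """Sketch: retry docker compose with exponential backoff."""
    delay = 2.0
    for attempt in range(1, attempts + 1):
        result = run(["docker", "compose", *args],
                     capture_output=True, text=True)
        if result.returncode == 0:
            return
        # Permanent failures abort immediately instead of retrying.
        if any(err in result.stderr for err in TERMINAL_ERRORS):
            raise RuntimeError(f"permanent failure: {result.stderr.strip()}")
        if attempt == attempts:
            raise RuntimeError("docker compose failed after retries")
        time.sleep(delay)
        delay *= 2  # exponential backoff
```

Injecting `run` keeps the sketch testable without a Docker daemon; the real helper shells out directly.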
### The Logging Helper (`_sync_logging_helper`)
One of the most critical parts of the engine is ensuring that every honeypot service, regardless of its unique implementation, speaks the same syslog "language." The engine iterates through every active service and copies `templates/decnet_logging.py` into their respective build contexts before the build starts. This allows service containers to import the standardized logging logic at runtime.
---
## Error Handling & Resilience
The Engine is designed to handle "Permanent" vs "Transient" failures. It identifies errors such as `manifest unknown` or `repository does not exist` as terminal and will abort immediately, while others (connection resets, daemon timeouts) trigger a retry cycle.
## State Management
The Engine maintains a `decnet-state.json` file. This file acts as the source of truth for what is currently "on the wire." Without this state, a proper `teardown` would be impossible, as the engine wouldn't know which virtual interfaces were created on the host NIC.


@@ -1,58 +0,0 @@
# DECNET Domain Models
> [!IMPORTANT]
> **DEVELOPMENT DISCLAIMER**: DECNET is currently in active development. The models defined in `decnet/models.py` are subject to significant changes as the framework evolves.
## Overview
The `decnet/models.py` file serves as the centralized repository for all **Domain Models** used throughout the project. These are implemented using Pydantic v2 and ensure that the core business logic remains decoupled from the specific implementation details of the database (SQLAlchemy/SQLite) or the web layer (FastAPI).
---
## Model Hierarchy
DECNET categorizes its models into two primary functional groups: **INI Specifications** and **Runtime Configurations**.
### 1. INI Specifications (Input Validation)
These models are designed to represent the structure of a `decnet.ini` file. They are primarily consumed by the `ini_loader.py` during the parsing of user-provided configuration files.
- **`IniConfig`**: The root model for a full deployment specification. It includes global settings like `subnet`, `gateway`, and `interface`, and contains a list of `DeckySpec` objects.
- **`DeckySpec`**: A high-level description of a machine. It contains optional fields that the user *may* provide in an INI file (e.g., `ip`, `archetype`, `services`).
- **`CustomServiceSpec`**: Defines external "Bring-Your-Own" services using Docker images and custom execution commands.
### 2. Runtime Configurations (Operational State)
These models represent the **active, fully resolved state** of the deployment. Unlike the specifications, these models require all fields to be populated and valid.
- **`DecnetConfig`**: The operational root of a deployment. It includes the resolved network settings and the list of active `DeckyConfig` objects. It is used by the **Engine** for orchestration and is persisted in `decnet-state.json`.
- **`DeckyConfig`**: A fully materialized decoy configuration. It includes generated hostnames, resolved distro images, and specific IP addresses.
---
## The Fleet Transformer (`fleet.py`)
The connection between the **Specifications** and the **Runtime Configurations** is handled by `decnet/fleet.py`.
The function `build_deckies_from_ini` takes an `IniConfig` as input and performs the following "up-conversion" logic:
- **IP Allocation**: Auto-allocates free IPs from the subnet for any deckies missing an explicit IP in the INI.
- **Service Resolution**: Validates that all requested services exist in the registry and assigns defaults from archetypes if needed.
- **Environment Inheritance**: Inherits settings like rotation intervals (`mutate_interval`) from the global INI context down to individual deckies.
---
## Structural Validation: `IniContent`
To ensure that saved deployments in the database or provided by the API remain structurally sound, DECNET uses a specialized `IniContent` type.
- **`validate_ini_string`**: A pre-validator that uses Python's native `configparser`. It ensures that the content is a valid INI string, does not exceed 512KB, and contains at least one section.
- **Standardized Errors**: It raises specifically formatted `ValueError` exceptions that are captured by both the CLI and the Web UI to provide clear feedback to the user.
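A minimal sketch of such a pre-validator, assuming the three checks described above (parseable INI, 512KB cap, at least one section); the error wording is illustrative:

```python
import configparser

MAX_INI_BYTES = 512 * 1024

def validate_ini_string(value: str) -> str:
    """Sketch: validate an INI string before it is stored or deployed."""
    if len(value.encode()) > MAX_INI_BYTES:
        raise ValueError("INI content exceeds the 512KB limit")
    parser = configparser.ConfigParser()
    try:
        parser.read_string(value)
    except configparser.Error as exc:
        raise ValueError(f"invalid INI content: {exc}") from exc
    if not parser.sections():
        raise ValueError("INI content must contain at least one section")
    return value

validate_ini_string("[decky01]\narchetype = web\n")  # passes
```

Raising plain `ValueError` with a readable message is what lets both the CLI and the Web UI surface the same feedback without backend-specific handling.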
---
## Key Consumer Modules
| Module | Usage |
| :--- | :--- |
| **`decnet/ini_loader.py`** | Uses `IniConfig` and `DeckySpec` to parse raw `.ini` files into structured objects. |
| **`decnet/fleet.py`** | Transforms `IniConfig` specs into `DeckyConfig` operational models. |
| **`decnet/config.py`** | Uses `DecnetConfig` and `DeckyConfig` to manage the lifecycle of `decnet-state.json`. |
| **`decnet/web/db/models.py`** | Utilizes `IniContent` to enforce structural validity on INI strings stored in the database. |


@@ -1,134 +0,0 @@
# DECNET Web & Database Models: Architectural Deep Dive
> [!IMPORTANT]
> **DEVELOPMENT DISCLAIMER**: DECNET is currently in active development. The storage schemas and API signatures defined in `decnet/web/db/models.py` are subject to radical change as the framework's analytical capabilities and distributed features expand.
## 1. Introduction & Philosophy
The `decnet/web/db/models.py` file represents the structural backbone of the DECNET web interface and its underlying analytical engine. It serves a dual purpose that is central to the project's architecture:
1. **Unified Source of Truth**: By utilizing **SQLModel**, DECNET collapses the traditional barrier between Pydantic data validation and SQLAlchemy ORM mapping. This allows a single class definition to act as both a database table and an API data object, drastically reducing the "boilerplate" associated with traditional web-database pipelines.
2. **Analytical Scalability**: The models are designed to scale from small-scale local deployments using **SQLite** to large-scale, enterprise-ready environments backed by **MySQL**. This is achieved through clever usage of SQLAlchemy "Variants" and abstraction layers for large text blobs.
---
## 2. The Database Layer (SQLModel Entities)
These models define the physical tables within the DECNET infrastructure. Every class marked with `table=True` is interpreted by the repository layer to generate the corresponding DDL (Data Definition Language) for the target database.
### 2.1 Identity & Security: The `User` Entity
The `User` model handles dashboard access control and basic identity management.
* `uuid`: A unique string identifier. While integers are often used for IDs, DECNET uses strings to support potential future transitions to UUIDs without schema breakage.
* `username`: The primary login handle. It is both `unique` and `indexed` for rapid authentication lookups.
* `password_hash`: Stores the Argon2 or bcrypt hash. Length constraints in the routers ensure that raw passwords never exceed 72 bytes (bcrypt's effective input limit), preventing "Long Password Denial of Service" attacks on expensive hashing algorithms.
* `role`: A simple string-based permission field (e.g., `admin`, `viewer`).
* `must_change_password`: A boolean flag used for fresh deployments or manual administrative resets, forcing the user to rotate their credentials upon their first authenticated session.
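A hedged sketch of the password-length boundary check: the helper name and error message are hypothetical, and only the 72-byte limit (bcrypt hashes at most the first 72 bytes of its input) comes from the text above:

```python
MAX_PASSWORD_BYTES = 72  # bcrypt only processes the first 72 bytes of input

def check_password_length(raw: str) -> None:
    """Hypothetical router-level guard against long-password DoS."""
    if len(raw.encode("utf-8")) > MAX_PASSWORD_BYTES:
        raise ValueError("password exceeds 72 bytes")
```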
### 2.2 Intelligence & Attribution: `Attacker` and `AttackerBehavior`
These two tables form the core of DECNET's "Attacker Profiling" system. They are split into two tables to maintain "Narrow vs. Wide" performance characteristics.
#### The `Attacker` Entity (Broad Analytics)
The `Attacker` table stores the "primary" record for every unique IP discovered by the honeypot fleet.
* `ip`: The source IP address. This is the primary key and is heavily indexed.
* `first_seen` / `last_seen`: Tracking the lifecycle of an attacker's engagement with the network.
* `event_count` / `service_count` / `decky_count`: Aggregated counters used by the stats dashboard to visualize the magnitude of an engagement.
* `services` / `deckies`: JSON-serialized lists of every service and machine reached by the attacker. Using `_BIG_TEXT` here allows these lists to grow significantly during long-term campaigns.
* `traversal_path`: A string representation (e.g., `omega → epsilon → zulu`) that helps analysts visualize lateral movement attempts recorded by the correlation engine.
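Building such a path string is straightforward; this sketch collapses consecutive duplicate hops, which is an assumption about how the correlation engine deduplicates:

```python
def traversal_path(hops: list[str]) -> str:
    """Join visited decky names into 'omega → epsilon → zulu' form,
    dropping immediate repeats (illustrative, not the engine's code)."""
    out: list[str] = []
    for hop in hops:
        if not out or out[-1] != hop:
            out.append(hop)
    return " → ".join(out)
```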
#### The `AttackerBehavior` Entity (Granular Analytics)
This "Wide" table stores behavioral signatures. It is separated from the main `Attacker` record so that high-frequency updates to timing stats or sniffer-derived packet signatures don't lock the primary attribution rows.
* `os_guess`: Derived from the `os_fingerprint` and `sniffer` engines, providing an estimate of the attacker's operating system based on TCP/IP stack nuances.
* `tcp_fingerprint`: A JSON blob storing the raw TCP signature (Window size, MSS, Option sequence).
* `behavior_class`: A classification (e.g., `beaconing`, `interactive`, `brute_force`) derived from log inter-arrival timing (IAT).
* `timing_stats`: Stores a JSON dictionary of mean/median/stdev for event timing, used to detect automated tooling.
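The IAT-based classification can be illustrated with the standard `statistics` module. The threshold used here (stdev under 10% of the mean implies beaconing) is an invented example, not DECNET's actual heuristic:

```python
import statistics

def timing_profile(timestamps: list[float]) -> dict:
    """Compute mean/median/stdev of inter-arrival times (needs >= 2 events)
    and apply an illustrative beaconing heuristic."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    stats = {
        "mean": statistics.mean(iats),
        "median": statistics.median(iats),
        "stdev": statistics.stdev(iats) if len(iats) > 1 else 0.0,
    }
    # Near-constant intervals suggest automated tooling phoning home
    automated = stats["stdev"] < 0.1 * stats["mean"]
    stats["behavior_class"] = "beaconing" if automated else "interactive"
    return stats
```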
### 2.3 Telemetry: `Log` and `Bounty`
These tables store the "raw" data generated by the honeypots.
* **`Log` Table**: The primary event sink. Every line from the collector ends up here.
* `event_type`: The MSGID from the RFC 5424 header (e.g., `connect`, `exploit`).
* `raw_line`: The full, un-parsed syslog string for forensic verification.
* `fields`: A JSON blob containing the structured data (SD-ELEMENTS) extracted during normalization.
* **`Bounty` Table**: Specifically for high-value events. When a service detects "Gold" (like a plain-text password or a known PoC payload), it is mirrored here for rapid analyst review.
### 2.4 System State: The `State` Entity
The `State` table acts as the orchestrator's brain. It stores the `decnet-state.json` content within the database when the system is integrated with the web layer.
* `key`: The configuration key (e.g., `global_config`, `active_deployment`).
* `value`: A `MEDIUMTEXT` JSON blob. This is potentially its largest field, storing the entire resolved configuration of every running Decky.
---
## 3. The API Layer (Pydantic DTOs)
These models define how data moves across the wire between the FastAPI backend and the frontend.
### 3.1 Authentication Pipeline
* `LoginRequest`: Validates incoming credentials before passing them to the security middleware.
* `Token`: The standard OAuth2 bearer token response, enriched with the `must_change_password` hint.
* `ChangePasswordRequest`: Ensures the old password is provided and the new one meets the project's security constraints.
### 3.2 Reporting & Pagination
DECNET uses a standardized "Envelope" pattern for broad analytical responses (`LogsResponse`, `AttackersResponse`, `BountyResponse`).
* `total`: The total count of matching records in the database, ignoring the `limit`/`offset` slice.
* `limit` / `offset`: The specific slice of data returned, supporting "Infinite Scroll" or traditional pagination in the UI.
* `data`: A list of dictionaries. By using `dict[str, Any]` here, the API remains flexible with SQLModel's dynamic attribute loading.
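As a sketch, the envelope reduces to a small container plus a slice. The `Envelope` and `paginate` names are hypothetical; the actual response models are Pydantic classes:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Envelope:
    total: int                 # count of all matching rows
    limit: int                 # page size requested
    offset: int                # start of the returned slice
    data: list[dict[str, Any]] = field(default_factory=list)

def paginate(rows: list[dict[str, Any]], limit: int, offset: int) -> Envelope:
    """Wrap one page of results in the envelope pattern described above."""
    return Envelope(total=len(rows), limit=limit, offset=offset,
                    data=rows[offset:offset + limit])
```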
### 3.3 System Administration
* **`DeployIniRequest`**: The most critical input model. It takes `ini_content` as a validated string. By using the `IniContent` annotated type, the API rejects malformed deployments before they ever touch the fleet builder.
* **`MutateIntervalRequest`**: Uses a strict REGEX pattern (`^[1-9]\d*[mdMyY]$`) to ensure intervals like `30m` (30 minutes) or `2d` (2 days) are valid before being applied to the orchestrator.
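The pattern can be exercised with the standard `re` module; `is_valid_interval` is a hypothetical helper, but the regex is the one quoted above:

```python
import re

# Pattern from MutateIntervalRequest: positive integer + unit (m, d, M, y, Y)
_INTERVAL_RE = re.compile(r"^[1-9]\d*[mdMyY]$")

def is_valid_interval(value: str) -> bool:
    """Return True for intervals like '30m' or '2d'; reject '0m', '15h', etc."""
    return bool(_INTERVAL_RE.fullmatch(value))
```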
---
## 4. Technical Foundations
### 4.1 Cross-DB Compatibility Logic
The project uses SQLAlchemy's `with_variant` mechanism to bridge the discrepancies between SQLite (which has simplified typing) and MySQL (which has strict size constraints).
```python
_BIG_TEXT = Text().with_variant(MEDIUMTEXT(), "mysql")
```
This abstraction ensures that fields like `Attacker.services` (which can grow to thousands of items) are stored as `MEDIUMTEXT` (16 MiB) on MySQL, whereas standard SQLAlchemy `Text` (which maps to MySQL's 64 KiB `TEXT`) would silently truncate the data, leading to analytical loss.
### 4.2 High-Fidelity Normalization
Data arriving from distributed honeypots is often "dirty." The models include custom pre-validators like `_normalize_null`.
* **Null Coalescing**: Services often emit logging values as `"null"` or `"undefined"` strings. The `NullableString` type automatically converts these "noise" strings into actual Python `None` types during ingestion.
* **Timestamp Integrity**: `NullableDatetime` ensures that various ISO formats or epoch timestamps provided by different service containers are normalized into standard UTC datetime objects.
---
## 5. Integration Case Studies (Deep Analysis)
To understand how these models function, we must examine their lifecycle across the web stack.
### 5.1 The Repository Layer (`decnet/web/db/sqlmodel_repo.py`)
The repository is the primary consumer of the "Entities." It utilizes the metadata generated by SQLModel to:
1. **Generate DDL**: On startup, the repository calls `SQLModel.metadata.create_all()`. This takes every `table=True` class and translates it into `CREATE TABLE` statements tailored to the active engine (SQLite or MySQL).
2. **Translate DTOs**: When the repository fetches an `Attacker` from the DB, SQLModel automatically populates the Pydantic-style attributes, allowing the repository to return objects that are immediately serializable by the routers.
### 5.2 The Dashboard Routers
Specific endpoints rely on these models for boundary safety:
* **`api_deploy_deckies.py`**: Uses `DeployIniRequest`. This ensures that even if a user tries to POST a massive binary file instead of an INI, the Pydantic layer (powered by `decnet.models.validate_ini_string`) will intercept and reject the request with a `422 Unprocessable Entity` error before it reaches the orchestrator.
* **`api_get_stats.py`**: Uses `StatsResponse`. This model serves as a "rollup" that aggregates data from the `Log`, `Attacker`, and `State` tables into a single unified JSON object for the dashboard's "At a Glance" view.
* **`api_get_health.py`**: Uses `HealthResponse`. This model provides a nested view of the system, where each sub-component (Engine, Collector, DB) is represented as a `ComponentHealth` object, allowing the UI to show granular "Success" or "Failure" states.
---
## 6. Futureproofing & Guidelines
As the project grows, the following habits must be maintained:
1. **Keep the Row Narrow**: Always separate behavioral data that updates frequently into auxiliary tables like `AttackerBehavior`.
2. **Use Variants**: Never use standard `String` or `Text` for JSON blobs; always use `_BIG_TEXT` to respect MySQL's storage limitations.
3. **Validate at the Boundary**: Ensure every new API request model uses Pydantic's strict typing to prevent malicious payloads from reaching the database layer.


```diff
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "decnet"
-version = "0.2.0"
+version = "0.2"
 description = "Deception network: deploy honeypot deckies that appear as real LAN hosts"
 requires-python = ">=3.11"
 dependencies = [
@@ -16,44 +16,34 @@ dependencies = [
     "fastapi>=0.110.0",
     "uvicorn>=0.29.0",
     "aiosqlite>=0.20.0",
-    "aiomysql>=0.2.0",
     "PyJWT>=2.8.0",
     "bcrypt>=4.1.0",
     "psutil>=5.9.0",
     "python-dotenv>=1.0.0",
     "sqlmodel>=0.0.16",
-    "scapy>=2.6.1",
 ]
 
 [project.optional-dependencies]
-tracing = [
-    "opentelemetry-api>=1.20.0",
-    "opentelemetry-sdk>=1.20.0",
-    "opentelemetry-exporter-otlp>=1.20.0",
-    "opentelemetry-instrumentation-fastapi>=0.41b0",
-]
 dev = [
-    "decnet[tracing]",
-    "pytest>=9.0.3",
-    "ruff>=0.15.10",
-    "bandit>=1.9.4",
-    "pip>=26.0",
-    "pip-audit>=2.10.0",
-    "httpx>=0.28.1",
-    "hypothesis>=6.151.14",
-    "pytest-cov>=7.1.0",
-    "pytest-asyncio>=1.3.0",
-    "freezegun>=1.5.5",
-    "schemathesis>=4.15.1",
+    "pytest>=8.0",
+    "ruff>=0.4",
+    "bandit>=1.7",
+    "pip-audit>=2.0",
+    "httpx>=0.27.0",
+    "hypothesis>=6.0",
+    "pytest-cov>=7.0",
+    "pytest-asyncio>=1.0",
+    "freezegun>=1.5",
+    "schemathesis>=4.0",
     "pytest-xdist>=3.8.0",
-    "flask>=3.1.3",
-    "twisted>=25.5.0",
-    "requests>=2.33.1",
-    "redis>=7.4.0",
-    "pymysql>=1.1.2",
-    "psycopg2-binary>=2.9.11",
-    "paho-mqtt>=2.1.0",
-    "pymongo>=4.16.0",
+    "flask>=3.0",
+    "twisted>=24.0",
+    "requests>=2.32",
+    "redis>=5.0",
+    "pymysql>=1.1",
+    "psycopg2-binary>=2.9",
+    "paho-mqtt>=2.0",
+    "pymongo>=4.0",
 ]
 
 [project.scripts]
@@ -62,7 +52,7 @@ decnet = "decnet.cli:app"
 [tool.pytest.ini_options]
 asyncio_mode = "auto"
 asyncio_debug = "true"
-addopts = "-m 'not fuzz and not live' -v -q -x -n logical --dist loadscope"
+addopts = "-m 'not fuzz and not live' -v -q -x -n logical"
 markers = [
     "fuzz: hypothesis-based fuzz tests (slow, run with -m fuzz or -m '' for all)",
     "live: live subprocess service tests (run with -m live)",
```


```diff
@@ -1,6 +1,3 @@
-[[project]]
-title = "DECNET API"
-continue-on-failure = true
 request-timeout = 5.0
 [[operations]]
```


```diff
@@ -62,6 +62,7 @@ _CONTAINERS = [
 def _log(event_type: str, severity: int = 6, **kwargs) -> None:
     line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
+    print(line, flush=True)
     write_syslog_file(line)
     forward_syslog(line, LOG_TARGET)
```


```diff
@@ -40,6 +40,7 @@ _ROOT_RESPONSE = {
 def _log(event_type: str, severity: int = 6, **kwargs) -> None:
     line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
+    print(line, flush=True)
     write_syslog_file(line)
     forward_syslog(line, LOG_TARGET)
```


```diff
@@ -22,6 +22,7 @@ BANNER = os.environ.get("FTP_BANNER", "220 (vsFTPd 3.0.3)")
 def _log(event_type: str, severity: int = 6, **kwargs) -> None:
     line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
+    print(line, flush=True)
     write_syslog_file(line)
     forward_syslog(line, LOG_TARGET)
```


```diff
@@ -68,6 +68,7 @@ def _fix_server_header(response):
 def _log(event_type: str, severity: int = 6, **kwargs) -> None:
     line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
+    print(line, flush=True)
     write_syslog_file(line)
     forward_syslog(line, LOG_TARGET)
@@ -79,7 +80,7 @@ def log_request():
         method=request.method,
         path=request.path,
         remote_addr=request.remote_addr,
-        headers=json.dumps(dict(request.headers)),
+        headers=dict(request.headers),
         body=request.get_data(as_text=True)[:512],
     )
```


@@ -1,29 +0,0 @@
ARG BASE_IMAGE=debian:bookworm-slim
FROM ${BASE_IMAGE}
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 python3-pip openssl \
&& rm -rf /var/lib/apt/lists/*
ENV PIP_BREAK_SYSTEM_PACKAGES=1
RUN pip3 install --no-cache-dir flask jinja2
COPY decnet_logging.py /opt/decnet_logging.py
COPY server.py /opt/server.py
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
RUN mkdir -p /opt/tls
EXPOSE 443
RUN useradd -r -s /bin/false -d /opt decnet \
&& chown -R decnet:decnet /opt/tls \
&& apt-get update && apt-get install -y --no-install-recommends libcap2-bin \
&& rm -rf /var/lib/apt/lists/* \
&& (find /usr/bin/ -maxdepth 1 -name 'python3*' -type f -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD kill -0 1 || exit 1
USER decnet
ENTRYPOINT ["/entrypoint.sh"]


@@ -1,89 +0,0 @@
#!/usr/bin/env python3
"""
Shared RFC 5424 syslog helper for DECNET service templates.
Services call syslog_line() to format an RFC 5424 message, then
write_syslog_file() to emit it to stdout — Docker captures it, and the
host-side collector streams it into the log file.
RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
Facility: local0 (16), PEN for SD element ID: decnet@55555
"""
from datetime import datetime, timezone
from typing import Any
# ─── Constants ────────────────────────────────────────────────────────────────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_NILVALUE = "-"
SEVERITY_EMERG = 0
SEVERITY_ALERT = 1
SEVERITY_CRIT = 2
SEVERITY_ERROR = 3
SEVERITY_WARNING = 4
SEVERITY_NOTICE = 5
SEVERITY_INFO = 6
SEVERITY_DEBUG = 7
_MAX_HOSTNAME = 255
_MAX_APPNAME = 48
_MAX_MSGID = 32
# ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str:
"""Escape SD-PARAM-VALUE per RFC 5424 §6.3.3."""
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _sd_element(fields: dict[str, Any]) -> str:
if not fields:
return _NILVALUE
params = " ".join(f'{k}="{_sd_escape(str(v))}"' for k, v in fields.items())
return f"[{_SD_ID} {params}]"
def syslog_line(
service: str,
hostname: str,
event_type: str,
severity: int = SEVERITY_INFO,
timestamp: datetime | None = None,
msg: str | None = None,
**fields: Any,
) -> str:
"""
Return a single RFC 5424-compliant syslog line (no trailing newline).
Args:
service: APP-NAME (e.g. "http", "mysql")
hostname: HOSTNAME (decky node name)
event_type: MSGID (e.g. "request", "login_attempt")
severity: Syslog severity integer (default: INFO=6)
timestamp: UTC datetime; defaults to now
msg: Optional free-text MSG
**fields: Encoded as structured data params
"""
pri = f"<{_FACILITY_LOCAL0 * 8 + severity}>"
ts = (timestamp or datetime.now(timezone.utc)).isoformat()
host = (hostname or _NILVALUE)[:_MAX_HOSTNAME]
appname = (service or _NILVALUE)[:_MAX_APPNAME]
msgid = (event_type or _NILVALUE)[:_MAX_MSGID]
sd = _sd_element(fields)
message = f" {msg}" if msg else ""
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
def write_syslog_file(line: str) -> None:
"""Emit a syslog line to stdout for Docker log capture."""
print(line, flush=True)
def forward_syslog(line: str, log_target: str) -> None:
"""No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
pass


@@ -1,18 +0,0 @@
#!/bin/bash
set -e
TLS_DIR="/opt/tls"
CERT="${TLS_CERT:-$TLS_DIR/cert.pem}"
KEY="${TLS_KEY:-$TLS_DIR/key.pem}"
# Generate a self-signed certificate if none exists
if [ ! -f "$CERT" ] || [ ! -f "$KEY" ]; then
mkdir -p "$TLS_DIR"
CN="${TLS_CN:-${NODE_NAME:-localhost}}"
openssl req -x509 -newkey rsa:2048 -nodes \
-keyout "$KEY" -out "$CERT" \
-days 3650 -subj "/CN=$CN" \
2>/dev/null
fi
exec python3 /opt/server.py


@@ -1,136 +0,0 @@
#!/usr/bin/env python3
"""
HTTPS service emulator using Flask + TLS.
Identical to the HTTP honeypot but wrapped in TLS. Accepts all requests,
logs every detail (method, path, headers, body, TLS info), and responds
with configurable pages. Forwards events as JSON to LOG_TARGET if set.
"""
import json
import logging
import os
import ssl
from pathlib import Path
from flask import Flask, request, send_from_directory
from werkzeug.serving import make_server, WSGIRequestHandler
from decnet_logging import syslog_line, write_syslog_file, forward_syslog
logging.getLogger("werkzeug").setLevel(logging.ERROR)
NODE_NAME = os.environ.get("NODE_NAME", "webserver")
SERVICE_NAME = "https"
LOG_TARGET = os.environ.get("LOG_TARGET", "")
PORT = int(os.environ.get("PORT", "443"))
SERVER_HEADER = os.environ.get("SERVER_HEADER", "Apache/2.4.54 (Debian)")
RESPONSE_CODE = int(os.environ.get("RESPONSE_CODE", "403"))
FAKE_APP = os.environ.get("FAKE_APP", "")
EXTRA_HEADERS = json.loads(os.environ.get("EXTRA_HEADERS", "{}"))
CUSTOM_BODY = os.environ.get("CUSTOM_BODY", "")
FILES_DIR = os.environ.get("FILES_DIR", "")
TLS_CERT = os.environ.get("TLS_CERT", "/opt/tls/cert.pem")
TLS_KEY = os.environ.get("TLS_KEY", "/opt/tls/key.pem")
_FAKE_APP_BODIES: dict[str, str] = {
"apache_default": (
"<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n"
"<html><head><title>Apache2 Debian Default Page</title></head>\n"
"<body><h1>Apache2 Debian Default Page</h1>\n"
"<p>It works!</p></body></html>"
),
"nginx_default": (
"<!DOCTYPE html><html><head><title>Welcome to nginx!</title></head>\n"
"<body><h1>Welcome to nginx!</h1>\n"
"<p>If you see this page, the nginx web server is successfully installed.</p>\n"
"</body></html>"
),
"wordpress": (
"<!DOCTYPE html><html><head><title>WordPress &rsaquo; Error</title></head>\n"
"<body id=\"error-page\"><div class=\"wp-die-message\">\n"
"<h1>Error establishing a database connection</h1></div></body></html>"
),
"phpmyadmin": (
"<!DOCTYPE html><html><head><title>phpMyAdmin</title></head>\n"
"<body><form method=\"post\" action=\"index.php\">\n"
"<input type=\"text\" name=\"pma_username\" />\n"
"<input type=\"password\" name=\"pma_password\" />\n"
"<input type=\"submit\" value=\"Go\" /></form></body></html>"
),
"iis_default": (
"<!DOCTYPE html><html><head><title>IIS Windows Server</title></head>\n"
"<body><h1>IIS Windows Server</h1>\n"
"<p>Welcome to Internet Information Services</p></body></html>"
),
}
app = Flask(__name__)
@app.after_request
def _fix_server_header(response):
response.headers["Server"] = SERVER_HEADER
return response
def _log(event_type: str, severity: int = 6, **kwargs) -> None:
line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
write_syslog_file(line)
forward_syslog(line, LOG_TARGET)
@app.before_request
def log_request():
_log(
"request",
method=request.method,
path=request.path,
remote_addr=request.remote_addr,
headers=dict(request.headers),
body=request.get_data(as_text=True)[:512],
)
@app.route("/", defaults={"path": ""})
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS", "HEAD"])
def catch_all(path):
# Serve static files directory if configured
if FILES_DIR and path:
files_path = Path(FILES_DIR) / path
if files_path.is_file():
return send_from_directory(FILES_DIR, path)
# Select response body: custom > fake_app preset > default 403
if CUSTOM_BODY:
body = CUSTOM_BODY
elif FAKE_APP and FAKE_APP in _FAKE_APP_BODIES:
body = _FAKE_APP_BODIES[FAKE_APP]
else:
body = (
"<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n"
"<html><head>\n"
"<title>403 Forbidden</title>\n"
"</head><body>\n"
"<h1>Forbidden</h1>\n"
"<p>You don't have permission to access this resource.</p>\n"
"<hr>\n"
f"<address>{SERVER_HEADER} Server at {NODE_NAME} Port 443</address>\n"
"</body></html>\n"
)
headers = {"Content-Type": "text/html", **EXTRA_HEADERS}
return body, RESPONSE_CODE, headers
class _SilentHandler(WSGIRequestHandler):
"""Suppress Werkzeug's Server header so Flask's after_request is the sole source."""
def version_string(self) -> str:
return ""
if __name__ == "__main__":
_log("startup", msg=f"HTTPS server starting as {NODE_NAME}")
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(TLS_CERT, TLS_KEY)
srv = make_server("0.0.0.0", PORT, app, request_handler=_SilentHandler) # nosec B104
srv.socket = ctx.wrap_socket(srv.socket, server_side=True)
srv.serve_forever()


```diff
@@ -236,6 +236,7 @@ _MAILBOXES = ["INBOX", "Sent", "Drafts", "Archive"]
 def _log(event_type: str, severity: int = 6, **kwargs) -> None:
     line = syslog_line(SERVICE_NAME, NODE_NAME, event_type, severity, **kwargs)
+    print(line, flush=True)
     write_syslog_file(line)
     forward_syslog(line, LOG_TARGET)
```

Some files were not shown because too many files have changed in this diff.