17 Commits

Author SHA1 Message Date
DECNET CI
499836c9e4 chore: auto-release v0.2 [skip ci] 2026-04-13 11:50:02 +00:00
bb9c782c41 Merge pull request 'tofix/merge-testing-to-main' (#6) from tofix/merge-testing-to-main into main
Some checks failed
Release / Auto-tag release (push) Successful in 16s
Release / Build, scan & push conpot (push) Failing after 4m22s
Release / Build, scan & push elasticsearch (push) Failing after 4m37s
Release / Build, scan & push llmnr (push) Failing after 4m32s
Release / Build, scan & push mongodb (push) Failing after 4m35s
Release / Build, scan & push ldap (push) Failing after 4m44s
Release / Build, scan & push docker_api (push) Failing after 4m57s
Release / Build, scan & push imap (push) Failing after 4m50s
Release / Build, scan & push http (push) Failing after 4m59s
Release / Build, scan & push mssql (push) Failing after 4m28s
Release / Build, scan & push mqtt (push) Failing after 4m38s
Release / Build, scan & push ftp (push) Failing after 5m8s
Release / Build, scan & push k8s (push) Failing after 5m3s
Release / Build, scan & push mysql (push) Failing after 1m56s
Release / Build, scan & push redis (push) Has started running
Release / Build, scan & push rdp (push) Has been cancelled
Release / Build, scan & push pop3 (push) Has been cancelled
Release / Build, scan & push postgres (push) Has been cancelled
Release / Build, scan & push sip (push) Has started running
Release / Build, scan & push smb (push) Has started running
Release / Build, scan & push smtp (push) Has started running
Release / Build, scan & push snmp (push) Has started running
Release / Build, scan & push ssh (push) Has started running
Release / Build, scan & push telnet (push) Has started running
Release / Build, scan & push tftp (push) Has started running
Release / Build, scan & push vnc (push) Has started running
Reviewed-on: #6
2026-04-13 13:49:47 +02:00
597854cc06 Merge branch 'merge/testing-to-main' into tofix/merge-testing-to-main
Some checks failed
PR Gate / Lint (ruff) (pull_request) Successful in 17s
PR Gate / SAST (bandit) (pull_request) Successful in 23s
PR Gate / Dependency audit (pip-audit) (pull_request) Successful in 36s
PR Gate / Test (pytest) (3.12) (pull_request) Failing after 1m0s
PR Gate / Test (pytest) (3.11) (pull_request) Failing after 1m10s
2026-04-13 07:48:43 -04:00
3b4b0a1016 merge: resolve conflicts between testing and main (remove tracked settings, fix pyproject deps) 2026-04-13 07:48:37 -04:00
DECNET CI
8ad3350d51 ci: auto-merge dev → testing [skip ci] 2026-04-13 05:55:46 +00:00
23ec470988 Merge pull request 'fix/merge-testing-to-main' (#4) from fix/merge-testing-to-main into main
Some checks failed
Release / Auto-tag release (push) Failing after 8s
Release / Build, scan & push cowrie (push) Has been skipped
Release / Build, scan & push docker_api (push) Has been skipped
Release / Build, scan & push elasticsearch (push) Has been skipped
Release / Build, scan & push ftp (push) Has been skipped
Release / Build, scan & push http (push) Has been skipped
Release / Build, scan & push imap (push) Has been skipped
Release / Build, scan & push k8s (push) Has been skipped
Release / Build, scan & push ldap (push) Has been skipped
Release / Build, scan & push llmnr (push) Has been skipped
Release / Build, scan & push mongodb (push) Has been skipped
Release / Build, scan & push mqtt (push) Has been skipped
Release / Build, scan & push mssql (push) Has been skipped
Release / Build, scan & push mysql (push) Has been skipped
Release / Build, scan & push pop3 (push) Has been skipped
Release / Build, scan & push postgres (push) Has been skipped
Release / Build, scan & push rdp (push) Has been skipped
Release / Build, scan & push real_ssh (push) Has been skipped
Release / Build, scan & push redis (push) Has been skipped
Release / Build, scan & push sip (push) Has been skipped
Release / Build, scan & push smb (push) Has been skipped
Release / Build, scan & push smtp (push) Has been skipped
Release / Build, scan & push snmp (push) Has been skipped
Release / Build, scan & push tftp (push) Has been skipped
Release / Build, scan & push vnc (push) Has been skipped
Reviewed-on: #4
2026-04-12 10:10:19 +02:00
4064e19af1 merge: resolve conflicts between testing and main
Some checks failed
PR Gate / Lint (ruff) (pull_request) Failing after 11s
PR Gate / Test (pytest) (3.11) (pull_request) Failing after 10s
PR Gate / Test (pytest) (3.12) (pull_request) Failing after 10s
PR Gate / SAST (bandit) (pull_request) Successful in 12s
PR Gate / Dependency audit (pip-audit) (pull_request) Failing after 13s
2026-04-12 04:09:17 -04:00
DECNET CI
ac4e5e1570 ci: auto-merge dev → testing
All checks were successful
CI / Lint (ruff) (push) Successful in 11s
CI / Test (pytest) (3.11) (push) Successful in 1m9s
CI / Test (pytest) (3.12) (push) Successful in 1m14s
CI / SAST (bandit) (push) Successful in 12s
CI / Dependency audit (pip-audit) (push) Successful in 21s
CI / Merge dev → testing (push) Has been skipped
CI / Open PR to main (push) Successful in 6s
PR Gate / Lint (ruff) (pull_request) Successful in 11s
PR Gate / Test (pytest) (3.11) (pull_request) Successful in 1m13s
PR Gate / Test (pytest) (3.12) (pull_request) Successful in 1m12s
PR Gate / SAST (bandit) (pull_request) Successful in 13s
PR Gate / Dependency audit (pip-audit) (pull_request) Successful in 21s
2026-04-12 07:53:07 +00:00
eb40be2161 chore: split dev and normal dependencies in pyproject.toml 2026-04-08 00:09:15 -04:00
0927d9e1e8 Modified: DEVELOPMENT.md 2026-04-06 12:03:36 -04:00
9c81fb4739 revert f64c251a9e
revert revert f8a9f8fc64

revert Added: modified notes. Finished CI/CD pipeline.
2026-04-06 18:02:28 +02:00
e4171789a8 Added: documentation about the deaddeck archetype and how to run it. 2026-04-06 11:51:24 -04:00
f64c251a9e revert f8a9f8fc64
revert Added: modified notes. Finished CI/CD pipeline.
2026-04-06 17:15:32 +02:00
c56c9fe667 Merge pull request 'Auto PR: dev → main' (#2) from dev into main
Some checks failed
Release / Auto-tag release (push) Successful in 14s
Release / Build, scan & push cowrie (push) Failing after 41s
Release / Build, scan & push docker_api (push) Failing after 30s
Release / Build, scan & push elasticsearch (push) Failing after 30s
Release / Build, scan & push ftp (push) Failing after 32s
Release / Build, scan & push http (push) Failing after 32s
Release / Build, scan & push imap (push) Failing after 31s
Release / Build, scan & push k8s (push) Failing after 32s
Release / Build, scan & push ldap (push) Failing after 30s
Release / Build, scan & push llmnr (push) Failing after 33s
Release / Build, scan & push mongodb (push) Failing after 32s
Release / Build, scan & push mqtt (push) Failing after 33s
Release / Build, scan & push mssql (push) Failing after 31s
Release / Build, scan & push mysql (push) Failing after 33s
Release / Build, scan & push pop3 (push) Failing after 33s
Release / Build, scan & push postgres (push) Failing after 32s
Release / Build, scan & push rdp (push) Failing after 32s
Release / Build, scan & push real_ssh (push) Failing after 33s
Release / Build, scan & push redis (push) Failing after 33s
Release / Build, scan & push sip (push) Failing after 33s
Release / Build, scan & push smb (push) Failing after 31s
Release / Build, scan & push smtp (push) Failing after 31s
Release / Build, scan & push snmp (push) Failing after 31s
Release / Build, scan & push tftp (push) Failing after 31s
Release / Build, scan & push vnc (push) Failing after 33s
Reviewed-on: #2
2026-04-06 17:11:54 +02:00
897f498bcd Merge dev into main: resolve conflicts, keep tests out of main
Some checks failed
Release / Auto-tag release (push) Successful in 14s
Release / Build, scan & push cowrie (push) Failing after 6m9s
Release / Build, scan & push docker_api (push) Failing after 31s
Release / Build, scan & push elasticsearch (push) Failing after 30s
Release / Build, scan & push ftp (push) Failing after 30s
Release / Build, scan & push http (push) Failing after 33s
Release / Build, scan & push imap (push) Failing after 30s
Release / Build, scan & push k8s (push) Failing after 30s
Release / Build, scan & push ldap (push) Failing after 33s
Release / Build, scan & push llmnr (push) Failing after 29s
Release / Build, scan & push mongodb (push) Failing after 30s
Release / Build, scan & push mqtt (push) Failing after 30s
Release / Build, scan & push mssql (push) Failing after 30s
Release / Build, scan & push mysql (push) Failing after 30s
Release / Build, scan & push pop3 (push) Failing after 32s
Release / Build, scan & push postgres (push) Failing after 29s
Release / Build, scan & push rdp (push) Failing after 29s
Release / Build, scan & push real_ssh (push) Failing after 31s
Release / Build, scan & push redis (push) Failing after 29s
Release / Build, scan & push sip (push) Failing after 30s
Release / Build, scan & push smb (push) Failing after 32s
Release / Build, scan & push smtp (push) Failing after 31s
Release / Build, scan & push snmp (push) Failing after 29s
Release / Build, scan & push tftp (push) Failing after 29s
Release / Build, scan & push vnc (push) Failing after 30s
2026-04-04 18:00:17 -03:00
92e06cb193 Add release workflow for auto-tagging and Docker image builds
Some checks failed
Release / Auto-tag release (push) Failing after 3s
Release / Build & push cowrie (push) Has been skipped
Release / Build & push docker_api (push) Has been skipped
Release / Build & push elasticsearch (push) Has been skipped
Release / Build & push ftp (push) Has been skipped
Release / Build & push http (push) Has been skipped
Release / Build & push imap (push) Has been skipped
Release / Build & push k8s (push) Has been skipped
Release / Build & push ldap (push) Has been skipped
Release / Build & push llmnr (push) Has been skipped
Release / Build & push mongodb (push) Has been skipped
Release / Build & push mqtt (push) Has been skipped
Release / Build & push mssql (push) Has been skipped
Release / Build & push mysql (push) Has been skipped
Release / Build & push pop3 (push) Has been skipped
Release / Build & push postgres (push) Has been skipped
Release / Build & push rdp (push) Has been skipped
Release / Build & push real_ssh (push) Has been skipped
Release / Build & push redis (push) Has been skipped
Release / Build & push sip (push) Has been skipped
Release / Build & push smb (push) Has been skipped
Release / Build & push smtp (push) Has been skipped
Release / Build & push snmp (push) Has been skipped
Release / Build & push tftp (push) Has been skipped
Release / Build & push vnc (push) Has been skipped
2026-04-04 17:16:53 -03:00
7ad7e1e53b main: remove tests and pytest dependency 2026-04-04 16:28:33 -03:00
27 changed files with 86 additions and 1443 deletions

View File

@@ -33,13 +33,13 @@ jobs:
         id: version
         run: |
           # Calculate next version (v0.x)
-          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
+          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0")
           NEXT_VER=$(python3 -c "
           tag = '$LATEST_TAG'.lstrip('v')
           parts = tag.split('.')
           major = int(parts[0]) if parts[0] else 0
           minor = int(parts[1]) if len(parts) > 1 else 0
-          print(f'{major}.{minor + 1}.0')
+          print(f'{major}.{minor + 1}')
           ")
           echo "Next version: $NEXT_VER (calculated from $LATEST_TAG)"
@@ -49,11 +49,7 @@ jobs:
           git add pyproject.toml
           git commit -m "chore: auto-release v$NEXT_VER [skip ci]" || echo "No changes to commit"
-          CHANGELOG=$(git log ${LATEST_TAG}..HEAD --oneline --no-decorate --no-merges)
-          git tag -a "v$NEXT_VER" -m "Auto-release v$NEXT_VER
-          Changes since $LATEST_TAG:
-          $CHANGELOG"
+          git tag -a "v$NEXT_VER" -m "Auto-release v$NEXT_VER"
           git push origin main --follow-tags
           echo "version=$NEXT_VER" >> $GITHUB_OUTPUT
@@ -115,13 +111,13 @@ $CHANGELOG"
           cache-from: type=gha
           cache-to: type=gha,mode=max
-      - name: Install Trivy
-        run: |
-          curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
       - name: Scan with Trivy
-        run: |
-          trivy image --exit-code 1 --severity CRITICAL --ignore-unfixed decnet-${{ matrix.service }}:scan
+        uses: aquasecurity/trivy-action@master
+        with:
+          image-ref: decnet-${{ matrix.service }}:scan
+          exit-code: "1"
+          severity: CRITICAL
+          ignore-unfixed: true
       - name: Push image
         if: success()
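The version-bump logic from the first hunk above can be checked standalone. A minimal sketch of the same calculation (the helper name and sample tags below are illustrative, not from the repo):

```python
# Mirrors the inline python3 -c script from the new side of the hunk above.
def next_version(latest_tag: str) -> str:
    tag = latest_tag.lstrip("v")
    parts = tag.split(".")
    major = int(parts[0]) if parts[0] else 0
    minor = int(parts[1]) if len(parts) > 1 else 0
    return f"{major}.{minor + 1}"

assert next_version("v0.0") == "0.1"  # fallback used when no tag exists yet
assert next_version("v0.1") == "0.2"  # matches the "auto-release v0.2" commit above
```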

View File

@@ -180,6 +180,7 @@ Archetypes are pre-packaged machine identities. One slug sets services, preferre
 | Slug | Services | OS Fingerprint | Description |
 |---|---|---|---|
+| `deaddeck` | ssh | linux | Initial machine to be exploited. Real SSH container. |
 | `windows-workstation` | smb, rdp | windows | Corporate Windows desktop |
 | `windows-server` | smb, rdp, ldap | windows | Windows domain member |
 | `domain-controller` | ldap, smb, rdp, llmnr | windows | Active Directory DC |
@@ -270,6 +271,11 @@ List live at any time with `decnet services`.
 Most services accept persona configuration to make honeypot responses more convincing. Config is passed via INI subsections (`[decky-name.service]`) or the `service_config` field in code.

 ```ini
+[deaddeck-1]
+amount=1
+archetype=deaddeck
+ssh.password=admin
+
 [decky-webmail.http]
 server_header = Apache/2.4.54 (Debian)
 fake_app = wordpress

View File

@@ -15,7 +15,6 @@ import typer
 from rich.console import Console
 from rich.table import Table

-from decnet.logging import get_logger
 from decnet.env import (
     DECNET_API_HOST,
     DECNET_API_PORT,
@@ -33,8 +32,6 @@ from decnet.ini_loader import load_ini
 from decnet.network import detect_interface, detect_subnet, allocate_ips, get_host_ip
 from decnet.services.registry import all_services

-log = get_logger("cli")
-
 app = typer.Typer(
     name="decnet",
     help="Deploy a deception network of honeypot deckies on your LAN.",
@@ -80,7 +77,6 @@ def api(
     import sys
     import os

-    log.info("API command invoked host=%s port=%d", host, port)
     console.print(f"[green]Starting DECNET API on {host}:{port}...[/]")
     _env: dict[str, str] = os.environ.copy()
     _env["DECNET_INGEST_LOG_FILE"] = str(log_file)
@@ -119,7 +115,6 @@ def deploy(
 ) -> None:
     """Deploy deckies to the LAN."""
     import os
-    log.info("deploy command invoked mode=%s deckies=%s dry_run=%s", mode, deckies, dry_run)
     if mode not in ("unihost", "swarm"):
         console.print("[red]--mode must be 'unihost' or 'swarm'[/]")
         raise typer.Exit(1)
@@ -239,13 +234,8 @@ def deploy(
         mutate_interval=mutate_interval,
     )
-    log.debug("deploy: config built deckies=%d interface=%s subnet=%s", len(config.deckies), config.interface, config.subnet)

     from decnet.engine import deploy as _deploy
     _deploy(config, dry_run=dry_run, no_cache=no_cache, parallel=parallel)
-    if dry_run:
-        log.info("deploy: dry-run complete, no containers started")
-    else:
-        log.info("deploy: deployment complete deckies=%d", len(config.deckies))

     if mutate_interval is not None and not dry_run:
         import subprocess  # nosec B404
@@ -300,7 +290,6 @@ def collect(
"""Stream Docker logs from all running decky service containers to a log file.""" """Stream Docker logs from all running decky service containers to a log file."""
import asyncio import asyncio
from decnet.collector import log_collector_worker from decnet.collector import log_collector_worker
log.info("collect command invoked log_file=%s", log_file)
console.print(f"[bold cyan]Collector starting[/] → {log_file}") console.print(f"[bold cyan]Collector starting[/] → {log_file}")
asyncio.run(log_collector_worker(log_file)) asyncio.run(log_collector_worker(log_file))
@@ -333,7 +322,6 @@ def mutate(
 @app.command()
 def status() -> None:
     """Show running deckies and their status."""
-    log.info("status command invoked")
     from decnet.engine import status as _status
     _status()
@@ -348,10 +336,8 @@ def teardown(
console.print("[red]Specify --all or --id <name>.[/]") console.print("[red]Specify --all or --id <name>.[/]")
raise typer.Exit(1) raise typer.Exit(1)
log.info("teardown command invoked all=%s id=%s", all_, id_)
from decnet.engine import teardown as _teardown from decnet.engine import teardown as _teardown
_teardown(decky_id=id_) _teardown(decky_id=id_)
log.info("teardown complete all=%s id=%s", all_, id_)
if all_: if all_:
_kill_api() _kill_api()

View File

@@ -8,14 +8,13 @@ The ingester tails the .json file; rsyslog can consume the .log file independent
 import asyncio
 import json
+import logging
 import re
 from datetime import datetime
 from pathlib import Path
 from typing import Any, Optional

-from decnet.logging import get_logger
-
-logger = get_logger("collector")
+logger = logging.getLogger("decnet.collector")

 # ─── RFC 5424 parser ──────────────────────────────────────────────────────────
@@ -140,13 +139,10 @@ def _stream_container(container_id: str, log_path: Path, json_path: Path) -> Non
                     lf.flush()
                     parsed = parse_rfc5424(line)
                     if parsed:
-                        logger.debug("collector: event written decky=%s type=%s", parsed.get("decky"), parsed.get("event_type"))
                         jf.write(json.dumps(parsed) + "\n")
                         jf.flush()
-                    else:
-                        logger.debug("collector: malformed RFC5424 line snippet=%r", line[:80])
     except Exception as exc:
-        logger.debug("collector: log stream ended container_id=%s reason=%s", container_id, exc)
+        logger.debug("Log stream ended for container %s: %s", container_id, exc)

 # ─── Async collector ──────────────────────────────────────────────────────────
@@ -174,10 +170,9 @@ async def log_collector_worker(log_file: str) -> None:
             asyncio.to_thread(_stream_container, container_id, log_path, json_path),
             loop=loop,
         )
-        logger.info("collector: streaming container=%s", container_name)
+        logger.info("Collecting logs from container: %s", container_name)

     try:
-        logger.info("collector started log_path=%s", log_path)
         client = docker.from_env()
         for container in client.containers.list():
@@ -198,9 +193,8 @@ async def log_collector_worker(log_file: str) -> None:
         await asyncio.to_thread(_watch_events)
     except asyncio.CancelledError:
-        logger.info("collector shutdown requested cancelling %d tasks", len(active))
         for task in active.values():
             task.cancel()
         raise
     except Exception as exc:
-        logger.error("collector error: %s", exc)
+        logger.error("Collector error: %s", exc)

View File

@@ -48,9 +48,8 @@ class Rfc5424Formatter(logging.Formatter):
         msg = record.getMessage()
         if record.exc_info:
             msg += "\n" + self.formatException(record.exc_info)
-        app = getattr(record, "decnet_component", self._app)
         return (
-            f"<{prival}>1 {ts} {self._hostname} {app}"
+            f"<{prival}>1 {ts} {self._hostname} {self._app}"
             f" {os.getpid()} {record.name} - {msg}"
         )

View File

@@ -11,7 +11,6 @@ import docker
 from rich.console import Console
 from rich.table import Table

-from decnet.logging import get_logger
 from decnet.config import DecnetConfig, clear_state, load_state, save_state
 from decnet.composer import write_compose
 from decnet.network import (
@@ -27,7 +26,6 @@ from decnet.network import (
     teardown_host_macvlan,
 )

-log = get_logger("engine")
 console = Console()

 COMPOSE_FILE = Path("decnet-compose.yml")
 _CANONICAL_LOGGING = Path(__file__).parent.parent.parent / "templates" / "decnet_logging.py"
@@ -108,14 +106,11 @@ def _compose_with_retry(
 def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False, parallel: bool = False) -> None:
-    log.info("deployment started n_deckies=%d interface=%s subnet=%s dry_run=%s", len(config.deckies), config.interface, config.subnet, dry_run)
-    log.debug("deploy: deckies=%s", [d.name for d in config.deckies])
     client = docker.from_env()
     ip_list = [d.ip for d in config.deckies]
     decky_range = ips_to_range(ip_list)
     host_ip = get_host_ip(config.interface)
-    log.debug("deploy: ip_range=%s host_ip=%s", decky_range, host_ip)

     net_driver = "IPvlan L2" if config.ipvlan else "MACVLAN"
     console.print(f"[bold cyan]Creating {net_driver} network[/] ({MACVLAN_NETWORK_NAME}) on {config.interface}")
@@ -145,7 +140,6 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False,
console.print(f"[bold cyan]Compose file written[/] → {compose_path}") console.print(f"[bold cyan]Compose file written[/] → {compose_path}")
if dry_run: if dry_run:
log.info("deployment dry-run complete compose_path=%s", compose_path)
console.print("[yellow]Dry run — no containers started.[/]") console.print("[yellow]Dry run — no containers started.[/]")
return return
@@ -167,15 +161,12 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False,
_compose_with_retry("build", "--no-cache", compose_file=compose_path) _compose_with_retry("build", "--no-cache", compose_file=compose_path)
_compose_with_retry("up", "--build", "-d", compose_file=compose_path) _compose_with_retry("up", "--build", "-d", compose_file=compose_path)
log.info("deployment complete n_deckies=%d", len(config.deckies))
_print_status(config) _print_status(config)
def teardown(decky_id: str | None = None) -> None: def teardown(decky_id: str | None = None) -> None:
log.info("teardown requested decky_id=%s", decky_id or "all")
state = load_state() state = load_state()
if state is None: if state is None:
log.warning("teardown: no active deployment found")
console.print("[red]No active deployment found (no decnet-state.json).[/]") console.print("[red]No active deployment found (no decnet-state.json).[/]")
return return
@@ -202,7 +193,6 @@ def teardown(decky_id: str | None = None) -> None:
     clear_state()

     net_driver = "IPvlan" if config.ipvlan else "MACVLAN"
-    log.info("teardown complete all deckies removed network_driver=%s", net_driver)
     console.print(f"[green]All deckies torn down. {net_driver} network removed.[/]")

View File

@@ -1,42 +0,0 @@
"""
DECNET application logging helpers.
Usage:
from decnet.logging import get_logger
log = get_logger("engine") # APP-NAME in RFC 5424 output becomes "engine"
The returned logger propagates to the root logger (configured in config.py with
Rfc5424Formatter), so level control via DECNET_DEVELOPER still applies globally.
"""
from __future__ import annotations
import logging
class _ComponentFilter(logging.Filter):
"""Injects *decnet_component* onto every LogRecord so Rfc5424Formatter can
use it as the RFC 5424 APP-NAME field instead of the hardcoded "decnet"."""
def __init__(self, component: str) -> None:
super().__init__()
self.component = component
def filter(self, record: logging.LogRecord) -> bool:
record.decnet_component = self.component # type: ignore[attr-defined]
return True
def get_logger(component: str) -> logging.Logger:
"""Return a named logger that self-identifies as *component* in RFC 5424.
Valid components: cli, engine, api, mutator, collector.
The logger is named ``decnet.<component>`` and propagates normally, so the
root handler (Rfc5424Formatter + level gate from DECNET_DEVELOPER) handles
output. Calling this function multiple times for the same component is safe.
"""
logger = logging.getLogger(f"decnet.{component}")
if not any(isinstance(f, _ComponentFilter) for f in logger.filters):
logger.addFilter(_ComponentFilter(component))
return logger
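The deleted module above and the Rfc5424Formatter change a few hunks earlier are two halves of one mechanism: the filter stamps `decnet_component` onto each record, and the formatter (before this diff) read it back as the RFC 5424 APP-NAME. A minimal self-contained sketch of that interplay, with a simplified formatter standing in for the real Rfc5424Formatter:

```python
import logging

class ComponentFilter(logging.Filter):
    """Same idea as the deleted _ComponentFilter: stamp a component name onto records."""
    def __init__(self, component: str) -> None:
        super().__init__()
        self.component = component

    def filter(self, record: logging.LogRecord) -> bool:
        record.decnet_component = self.component
        return True

class AppNameFormatter(logging.Formatter):
    """Simplified stand-in for Rfc5424Formatter: APP-NAME falls back to 'decnet'."""
    def format(self, record: logging.LogRecord) -> str:
        app = getattr(record, "decnet_component", "decnet")
        return f"{app} {record.name} - {record.getMessage()}"

handler = logging.StreamHandler()
handler.setFormatter(AppNameFormatter())
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

log = logging.getLogger("decnet.engine")
log.addFilter(ComponentFilter("engine"))
log.info("deployment started")  # emits: engine decnet.engine - deployment started
```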

View File

@@ -14,14 +14,12 @@ from decnet.fleet import all_service_names
 from decnet.composer import write_compose
 from decnet.config import DeckyConfig, DecnetConfig
 from decnet.engine import _compose_with_retry
-from decnet.logging import get_logger
 from pathlib import Path
 import anyio
 import asyncio

 from decnet.web.db.repository import BaseRepository

-log = get_logger("mutator")
 console = Console()
@@ -30,10 +28,8 @@ async def mutate_decky(decky_name: str, repo: BaseRepository) -> bool:
     Perform an Intra-Archetype Shuffle for a specific decky.
     Returns True if mutation succeeded, False otherwise.
     """
-    log.debug("mutate_decky: start decky=%s", decky_name)
     state_dict = await repo.get_state("deployment")
     if state_dict is None:
-        log.error("mutate_decky: no active deployment found in database")
         console.print("[red]No active deployment found in database.[/]")
         return False
@@ -77,14 +73,12 @@ async def mutate_decky(decky_name: str, repo: BaseRepository) -> bool:
     # Still writes files for Docker to use
     write_compose(config, compose_path)

-    log.info("mutation applied decky=%s services=%s", decky_name, ",".join(decky.services))
     console.print(f"[cyan]Mutating '{decky_name}' to services: {', '.join(decky.services)}[/]")

     try:
         # Wrap blocking call in thread
         await anyio.to_thread.run_sync(_compose_with_retry, "up", "-d", "--remove-orphans", compose_path)
     except Exception as e:
-        log.error("mutation failed decky=%s error=%s", decky_name, e)
         console.print(f"[red]Failed to mutate '{decky_name}': {e}[/]")
         return False
@@ -96,10 +90,8 @@ async def mutate_all(repo: BaseRepository, force: bool = False) -> None:
     Check all deckies and mutate those that are due.
     If force=True, mutates all deckies regardless of schedule.
     """
-    log.debug("mutate_all: start force=%s", force)
     state_dict = await repo.get_state("deployment")
     if state_dict is None:
-        log.error("mutate_all: no active deployment found")
         console.print("[red]No active deployment found.[/]")
         return
@@ -124,20 +116,15 @@ async def mutate_all(repo: BaseRepository, force: bool = False) -> None:
             mutated_count += 1

     if mutated_count == 0 and not force:
-        log.debug("mutate_all: no deckies due for mutation")
         console.print("[dim]No deckies are due for mutation.[/]")
-    else:
-        log.info("mutate_all: complete mutated_count=%d", mutated_count)

 async def run_watch_loop(repo: BaseRepository, poll_interval_secs: int = 10) -> None:
     """Run an infinite loop checking for deckies that need mutation."""
-    log.info("mutator watch loop started poll_interval_secs=%d", poll_interval_secs)
     console.print(f"[green]DECNET Mutator Watcher started (polling every {poll_interval_secs}s).[/]")
     try:
         while True:
             await mutate_all(force=False, repo=repo)
             await asyncio.sleep(poll_interval_secs)
     except KeyboardInterrupt:
-        log.info("mutator watch loop stopped")
         console.print("\n[dim]Mutator watcher stopped.[/]")

View File

@@ -1,4 +1,5 @@
 import asyncio
+import logging
 import os
 from contextlib import asynccontextmanager
 from typing import Any, AsyncGenerator, Optional
@@ -10,13 +11,12 @@ from pydantic import ValidationError
 from fastapi.middleware.cors import CORSMiddleware

 from decnet.env import DECNET_CORS_ORIGINS, DECNET_DEVELOPER, DECNET_INGEST_LOG_FILE
-from decnet.logging import get_logger
 from decnet.web.dependencies import repo
 from decnet.collector import log_collector_worker
 from decnet.web.ingester import log_ingestion_worker
 from decnet.web.router import api_router

-log = get_logger("api")
+log = logging.getLogger(__name__)

 ingestion_task: Optional[asyncio.Task[Any]] = None
 collector_task: Optional[asyncio.Task[Any]] = None
@@ -25,11 +25,9 @@ collector_task: Optional[asyncio.Task[Any]] = None
 async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
     global ingestion_task, collector_task

-    log.info("API startup initialising database")
     for attempt in range(1, 6):
         try:
             await repo.initialize()
-            log.debug("API startup DB initialised attempt=%d", attempt)
             break
         except Exception as exc:
             log.warning("DB init attempt %d/5 failed: %s", attempt, exc)
@@ -42,13 +40,11 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
     # Start background ingestion task
     if ingestion_task is None or ingestion_task.done():
         ingestion_task = asyncio.create_task(log_ingestion_worker(repo))
-        log.debug("API startup ingest worker started")

     # Start Docker log collector (writes to log file; ingester reads from it)
     _log_file = os.environ.get("DECNET_INGEST_LOG_FILE", DECNET_INGEST_LOG_FILE)
     if _log_file and (collector_task is None or collector_task.done()):
         collector_task = asyncio.create_task(log_collector_worker(_log_file))
-        log.debug("API startup collector worker started log_file=%s", _log_file)
     elif not _log_file:
         log.warning("DECNET_INGEST_LOG_FILE not set — Docker log collection disabled.")
     else:
@@ -56,7 +52,7 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
     yield

-    log.info("API shutdown cancelling background tasks")
+    # Shutdown background tasks
     for task in (ingestion_task, collector_task):
         if task and not task.done():
             task.cancel()
@@ -66,7 +62,6 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
                 pass
             except Exception as exc:
                 log.warning("Task shutdown error: %s", exc)
-    log.info("API shutdown complete")

 app: FastAPI = FastAPI(

View File

@@ -226,6 +226,11 @@ class SQLiteRepository(BaseRepository):
                 select(func.count(func.distinct(Log.attacker_ip)))
             )
         ).scalar() or 0
+        active_deckies = (
+            await session.execute(
+                select(func.count(func.distinct(Log.decky)))
+            )
+        ).scalar() or 0

         _state = await asyncio.to_thread(load_state)
         deployed_deckies = len(_state[0].deckies) if _state else 0
@@ -233,7 +238,7 @@ class SQLiteRepository(BaseRepository):
         return {
             "total_logs": total_logs,
             "unique_attackers": unique_attackers,
-            "active_deckies": deployed_deckies,
+            "active_deckies": active_deckies,
             "deployed_deckies": deployed_deckies,
         }
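The hunk above changes `active_deckies` from the count of deployed deckies to the count of distinct decky names seen in the log table. A minimal sketch of that query pattern with SQLAlchemy (sync and with a toy model for brevity; the real repository is async and its models live elsewhere):

```python
from sqlalchemy import Column, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Log(Base):
    """Toy stand-in for the real Log model."""
    __tablename__ = "logs"
    id = Column(Integer, primary_key=True)
    decky = Column(String)
    attacker_ip = Column(String)

engine = create_engine("sqlite://")  # in-memory database for the example
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Log(decky="decky-01", attacker_ip="1.2.3.4"),
        Log(decky="decky-01", attacker_ip="5.6.7.8"),
        Log(decky="decky-02", attacker_ip="1.2.3.4"),
    ])
    session.commit()
    # Same shape as the added hunk: count distinct decky names with activity.
    active_deckies = session.execute(
        select(func.count(func.distinct(Log.decky)))
    ).scalar() or 0
    print(active_deckies)  # 2
```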

View File

@@ -1,13 +1,13 @@
 import asyncio
 import os
+import logging
 import json
 from typing import Any
 from pathlib import Path

-from decnet.logging import get_logger
 from decnet.web.db.repository import BaseRepository

-logger = get_logger("api")
+logger: logging.Logger = logging.getLogger("decnet.web.ingester")

 async def log_ingestion_worker(repo: BaseRepository) -> None:
     """
@@ -22,7 +22,7 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
     _json_log_path: Path = Path(_base_log_file).with_suffix(".json")
     _position: int = 0

-    logger.info("ingest worker started path=%s", _json_log_path)
+    logger.info(f"Starting JSON log ingestion from {_json_log_path}")

     while True:
         try:
@@ -53,11 +53,10 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
                     try:
                         _log_data: dict[str, Any] = json.loads(_line.strip())
-                        logger.debug("ingest: record decky=%s event_type=%s", _log_data.get("decky"), _log_data.get("event_type"))
                         await repo.add_log(_log_data)
                         await _extract_bounty(repo, _log_data)
                     except json.JSONDecodeError:
-                        logger.error("ingest: failed to decode JSON log line: %s", _line.strip())
+                        logger.error(f"Failed to decode JSON log line: {_line}")
                         continue

                     # Update position after successful line read
@@ -66,10 +65,10 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
         except Exception as _e:
             _err_str = str(_e).lower()
             if "no such table" in _err_str or "no active connection" in _err_str or "connection closed" in _err_str:
-                logger.error("ingest: post-shutdown or fatal DB error: %s", _e)
+                logger.error(f"Post-shutdown or fatal DB error in ingester: {_e}")
                 break  # Exit worker — DB is gone or uninitialized
-            logger.error("ingest: error in worker: %s", _e)
+            logger.error(f"Error in log ingestion worker: {_e}")
             await asyncio.sleep(5)

         await asyncio.sleep(1)
@@ -97,36 +96,4 @@ async def _extract_bounty(repo: BaseRepository, log_data: dict[str, Any]) -> Non
             }
         })

-    # 2. HTTP User-Agent fingerprint
-    _headers = _fields.get("headers") if isinstance(_fields.get("headers"), dict) else {}
-    _ua = _headers.get("User-Agent") or _headers.get("user-agent")
-    if _ua:
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": log_data.get("service"),
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "http_useragent",
-                "value": _ua,
-                "method": _fields.get("method"),
-                "path": _fields.get("path"),
-            }
-        })
-
-    # 3. VNC client version fingerprint
-    _vnc_ver = _fields.get("client_version")
-    if _vnc_ver and log_data.get("event_type") == "version":
-        await repo.add_bounty({
-            "decky": log_data.get("decky"),
-            "service": log_data.get("service"),
-            "attacker_ip": log_data.get("attacker_ip"),
-            "bounty_type": "fingerprint",
-            "payload": {
-                "fingerprint_type": "vnc_client_version",
-                "value": _vnc_ver,
-            }
-        })
-
-    # 4. SSH client banner fingerprint (deferred — requires asyncssh server)
-    # Fires on: service=ssh, event_type=client_banner, fields.client_banner
+    # 2. Add more extractors here later (e.g. file hashes, crypto keys)
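The worker above tails a JSON-lines file by byte position. A minimal synchronous sketch of that tail-from-offset pattern (function name is illustrative; the real worker is async and writes each record to the repository):

```python
import json
from pathlib import Path


def tail_json_lines(path: Path, position: int) -> tuple[list[dict], int]:
    """Read any complete JSON lines appended since `position`; return records and new offset."""
    records: list[dict] = []
    with path.open("r", encoding="utf-8") as f:
        f.seek(position)
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # skip malformed lines, as the worker does
        position = f.tell()
    return records, position
```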

View File

@@ -1,17 +1,15 @@
+import logging
 import os

 from fastapi import APIRouter, Depends, HTTPException

-from decnet.logging import get_logger
-from decnet.config import DEFAULT_MUTATE_INTERVAL, DecnetConfig, _ROOT
+from decnet.config import DEFAULT_MUTATE_INTERVAL, DecnetConfig, _ROOT, log
 from decnet.engine import deploy as _deploy
 from decnet.ini_loader import load_ini_from_string
 from decnet.network import detect_interface, detect_subnet, get_host_ip
 from decnet.web.dependencies import get_current_user, repo
 from decnet.web.db.models import DeployIniRequest

-log = get_logger("api")
-
 router = APIRouter()
@@ -102,7 +100,7 @@ async def api_deploy_deckies(req: DeployIniRequest, current_user: str = Depends(
         }
         await repo.set_state("deployment", new_state_payload)
     except Exception as e:
-        log.exception("Deployment failed: %s", e)
+        logging.getLogger("decnet.web.api").exception("Deployment failed: %s", e)
         raise HTTPException(status_code=500, detail="Deployment failed. Check server logs for details.")

     return {"message": "Deckies deployed successfully"}

View File

@@ -1,15 +1,15 @@
 import json
 import asyncio
+import logging
 from typing import AsyncGenerator, Optional

 from fastapi import APIRouter, Depends, Query, Request
 from fastapi.responses import StreamingResponse

 from decnet.env import DECNET_DEVELOPER
-from decnet.logging import get_logger
 from decnet.web.dependencies import get_stream_user, repo

-log = get_logger("api")
+log = logging.getLogger(__name__)

 router = APIRouter()

View File

@@ -9,30 +9,15 @@ import Attackers from './components/Attackers';
 import Config from './components/Config';
 import Bounty from './components/Bounty';

-function isTokenValid(token: string): boolean {
-  try {
-    const payload = JSON.parse(atob(token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/')));
-    return typeof payload.exp === 'number' && payload.exp * 1000 > Date.now();
-  } catch {
-    return false;
-  }
-}
-
-function getValidToken(): string | null {
-  const stored = localStorage.getItem('token');
-  if (stored && isTokenValid(stored)) return stored;
-  if (stored) localStorage.removeItem('token');
-  return null;
-}
-
 function App() {
-  const [token, setToken] = useState<string | null>(getValidToken);
+  const [token, setToken] = useState<string | null>(localStorage.getItem('token'));
   const [searchQuery, setSearchQuery] = useState('');

   useEffect(() => {
-    const onAuthLogout = () => setToken(null);
-    window.addEventListener('auth:logout', onAuthLogout);
-    return () => window.removeEventListener('auth:logout', onAuthLogout);
+    const savedToken = localStorage.getItem('token');
+    if (savedToken) {
+      setToken(savedToken);
+    }
   }, []);

   const handleLogin = (newToken: string) => {

View File

@@ -1,4 +1,4 @@
-import React, { useEffect, useState, useRef } from 'react';
+import React, { useEffect, useState } from 'react';
 import './Dashboard.css';
 import { Shield, Users, Activity, Clock } from 'lucide-react';
@@ -29,52 +29,37 @@ const Dashboard: React.FC<DashboardProps> = ({ searchQuery }) => {
   const [stats, setStats] = useState<Stats | null>(null);
   const [logs, setLogs] = useState<LogEntry[]>([]);
   const [loading, setLoading] = useState(true);
-  const eventSourceRef = useRef<EventSource | null>(null);
-  const reconnectTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);

   useEffect(() => {
-    const connect = () => {
-      if (eventSourceRef.current) {
-        eventSourceRef.current.close();
-      }
-
-      const token = localStorage.getItem('token');
-      const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
-      let url = `${baseUrl}/stream?token=${token}`;
-      if (searchQuery) {
-        url += `&search=${encodeURIComponent(searchQuery)}`;
-      }
-
-      const es = new EventSource(url);
-      eventSourceRef.current = es;
-
-      es.onmessage = (event) => {
-        try {
-          const payload = JSON.parse(event.data);
-          if (payload.type === 'logs') {
-            setLogs(prev => [...payload.data, ...prev].slice(0, 100));
-          } else if (payload.type === 'stats') {
-            setStats(payload.data);
-            setLoading(false);
-            window.dispatchEvent(new CustomEvent('decnet:stats', { detail: payload.data }));
-          }
-        } catch (err) {
-          console.error('Failed to parse SSE payload', err);
-        }
-      };
-
-      es.onerror = () => {
-        es.close();
-        eventSourceRef.current = null;
-        reconnectTimerRef.current = setTimeout(connect, 3000);
-      };
-    };
-
-    connect();
+    const token = localStorage.getItem('token');
+    const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
+    let url = `${baseUrl}/stream?token=${token}`;
+    if (searchQuery) {
+      url += `&search=${encodeURIComponent(searchQuery)}`;
+    }
+
+    const eventSource = new EventSource(url);
+
+    eventSource.onmessage = (event) => {
+      try {
+        const payload = JSON.parse(event.data);
+        if (payload.type === 'logs') {
+          setLogs(prev => [...payload.data, ...prev].slice(0, 100));
+        } else if (payload.type === 'stats') {
+          setStats(payload.data);
+          setLoading(false);
+        }
+      } catch (err) {
+        console.error('Failed to parse SSE payload', err);
+      }
+    };
+
+    eventSource.onerror = (err) => {
+      console.error('SSE connection error, attempting to reconnect...', err);
+    };

     return () => {
-      if (reconnectTimerRef.current) clearTimeout(reconnectTimerRef.current);
-      if (eventSourceRef.current) eventSourceRef.current.close();
+      eventSource.close();
     };
   }, [searchQuery]);

View File

@@ -1,6 +1,7 @@
 import React, { useState, useEffect } from 'react';
 import { NavLink } from 'react-router-dom';
 import { Menu, X, Search, Activity, LayoutDashboard, Terminal, Settings, LogOut, Server, Archive } from 'lucide-react';
+import api from '../utils/api';
 import './Layout.css';

 interface LayoutProps {
@@ -20,12 +21,17 @@ const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => {
   };

   useEffect(() => {
-    const onStats = (e: Event) => {
-      const stats = (e as CustomEvent).detail;
-      setSystemActive(stats.deployed_deckies > 0);
+    const fetchStatus = async () => {
+      try {
+        const res = await api.get('/stats');
+        setSystemActive(res.data.deployed_deckies > 0);
+      } catch (err) {
+        console.error('Failed to fetch system status', err);
+      }
     };
-    window.addEventListener('decnet:stats', onStats);
-    return () => window.removeEventListener('decnet:stats', onStats);
+    fetchStatus();
+    const interval = setInterval(fetchStatus, 10000);
+    return () => clearInterval(interval);
   }, []);

   return (

View File

@@ -12,15 +12,4 @@ api.interceptors.request.use((config) => {
   return config;
 });

-api.interceptors.response.use(
-  (response) => response,
-  (error) => {
-    if (error.response?.status === 401) {
-      localStorage.removeItem('token');
-      window.dispatchEvent(new Event('auth:logout'));
-    }
-    return Promise.reject(error);
-  }
-);
-
 export default api;

View File

@@ -45,7 +45,7 @@
 ## Core / Hardening

-- [x] **Attacker fingerprinting** — HTTP User-Agent and VNC client version stored as `fingerprint` bounties. TLS JA3/JA4 and TCP window sizes require pcap (out of scope). SSH client banner deferred pending asyncssh server.
+- [ ] **Attacker fingerprinting** — Capture TLS JA3/JA4 hashes, TCP window sizes, User-Agent strings, and SSH client banners.
 - [ ] **Canary tokens** — Embed fake AWS keys and honeydocs into decky filesystems.
 - [ ] **Tarpit mode** — Slow down attackers by drip-feeding bytes or delaying responses.
 - [x] **Dynamic decky mutation** — Rotate exposed services or OS fingerprints over time.
@@ -66,7 +66,7 @@
 - [x] **Web dashboard** — Real-time React SPA + FastAPI backend for logs and fleet status.
 - [x] **Decky Inventory** — Dedicated "Decoy Fleet" page showing all deployed assets.
 - [ ] **Pre-built Kibana/Grafana dashboards** — Ship JSON exports for ELK/Grafana.
-- [~] **CLI live feed** — `decnet watch` — WON'T IMPLEMENT: redundant with `tail -f` on the existing log file; adds bloat without meaningful value.
+- [ ] **CLI live feed** — `decnet watch` command for a unified, colored terminal stream.
 - [x] **Traversal graph export** — Export attacker movement as JSON (via CLI).

 ## Deployment & Infrastructure

View File

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "decnet"
-version = "0.1.0"
+version = "0.2"
 description = "Deception network: deploy honeypot deckies that appear as real LAN hosts"
 requires-python = ">=3.11"
 dependencies = [

View File

@@ -12,7 +12,7 @@ Requires DECNET_DEVELOPER=true (set in tests/conftest.py) to expose /openapi.jso
""" """
import pytest import pytest
import schemathesis as st import schemathesis as st
from hypothesis import settings, Verbosity, HealthCheck from hypothesis import settings, Verbosity
from decnet.web.auth import create_access_token from decnet.web.auth import create_access_token
import subprocess import subprocess
@@ -102,6 +102,6 @@ schema = st.openapi.from_url(f"{LIVE_SERVER_URL}/openapi.json")
 @pytest.mark.fuzz
 @st.pytest.parametrize(api=schema)
-@settings(max_examples=3000, deadline=None, verbosity=Verbosity.debug, suppress_health_check=[HealthCheck.filter_too_much])
+@settings(max_examples=3000, deadline=None, verbosity=Verbosity.debug)
 def test_schema_compliance(case):
     case.call_and_validate()
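The removed `suppress_health_check` argument is standard Hypothesis API. A minimal sketch of what it does in isolation (toy strategy, not the repo's schema fixture): `filter_too_much` normally aborts a run when `.filter()` rejects too many generated examples, and suppressing it trades that safety check for heavier filtering.

```python
from hypothesis import HealthCheck, given, settings, strategies as st_

# Only ~1% of integers pass this filter, which would normally trip the
# filter_too_much health check; suppressing it lets the run continue.
@settings(max_examples=50, suppress_health_check=[HealthCheck.filter_too_much])
@given(st_.integers().filter(lambda n: n % 100 == 0))
def test_multiples_of_100(n):
    assert n % 100 == 0
```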

View File

@@ -1,418 +0,0 @@
"""
Tests for the DECNET cross-decky correlation engine.
Covers:
- RFC 5424 line parsing (parser.py)
- Traversal graph data types (graph.py)
- CorrelationEngine ingestion, querying, and reporting (engine.py)
"""
from __future__ import annotations
import json
import re
from datetime import datetime
from decnet.correlation.parser import LogEvent, parse_line
from decnet.correlation.graph import AttackerTraversal, TraversalHop
from decnet.correlation.engine import CorrelationEngine, _fmt_duration
from decnet.logging.syslog_formatter import format_rfc5424, SEVERITY_INFO, SEVERITY_WARNING
# ---------------------------------------------------------------------------
# Fixtures & helpers
# ---------------------------------------------------------------------------
_TS = "2026-04-04T10:00:00+00:00"
_TS2 = "2026-04-04T10:05:00+00:00"
_TS3 = "2026-04-04T10:10:00+00:00"
def _make_line(
service: str = "http",
hostname: str = "decky-01",
event_type: str = "connection",
src_ip: str = "1.2.3.4",
timestamp: str = _TS,
extra_fields: dict | None = None,
) -> str:
"""Build a real RFC 5424 DECNET syslog line via the formatter."""
fields = {}
if src_ip:
fields["src_ip"] = src_ip
if extra_fields:
fields.update(extra_fields)
return format_rfc5424(
service=service,
hostname=hostname,
event_type=event_type,
severity=SEVERITY_INFO,
timestamp=datetime.fromisoformat(timestamp),
**fields,
)
def _make_line_src(hostname: str, src: str, timestamp: str = _TS) -> str:
"""Build a line that uses `src` instead of `src_ip` (mssql style)."""
return format_rfc5424(
service="mssql",
hostname=hostname,
event_type="unknown_packet",
severity=SEVERITY_INFO,
timestamp=datetime.fromisoformat(timestamp),
src=src,
)
# ---------------------------------------------------------------------------
# parser.py — parse_line
# ---------------------------------------------------------------------------
class TestParserBasic:
def test_returns_none_for_blank(self):
assert parse_line("") is None
assert parse_line(" ") is None
def test_returns_none_for_non_rfc5424(self):
assert parse_line("this is not a syslog line") is None
assert parse_line("Jan 1 00:00:00 host sshd: blah") is None
def test_returns_log_event(self):
event = parse_line(_make_line())
assert isinstance(event, LogEvent)
def test_hostname_extracted(self):
event = parse_line(_make_line(hostname="decky-07"))
assert event.decky == "decky-07"
def test_service_extracted(self):
event = parse_line(_make_line(service="ftp"))
assert event.service == "ftp"
def test_event_type_extracted(self):
event = parse_line(_make_line(event_type="login_attempt"))
assert event.event_type == "login_attempt"
def test_timestamp_parsed(self):
event = parse_line(_make_line(timestamp=_TS))
assert event.timestamp == datetime.fromisoformat(_TS)
def test_raw_line_preserved(self):
line = _make_line()
event = parse_line(line)
assert event.raw == line.strip()
class TestParserAttackerIP:
def test_src_ip_field(self):
event = parse_line(_make_line(src_ip="10.0.0.1"))
assert event.attacker_ip == "10.0.0.1"
def test_src_field_fallback(self):
"""mssql logs use `src` instead of `src_ip`."""
event = parse_line(_make_line_src("decky-win", "192.168.1.5"))
assert event.attacker_ip == "192.168.1.5"
def test_no_ip_field_gives_none(self):
line = format_rfc5424("http", "decky-01", "startup", SEVERITY_INFO)
event = parse_line(line)
assert event is not None
assert event.attacker_ip is None
def test_extra_fields_in_dict(self):
event = parse_line(_make_line(extra_fields={"username": "root", "password": "admin"}))
assert event.fields["username"] == "root"
assert event.fields["password"] == "admin"
def test_src_ip_priority_over_src(self):
"""src_ip should win when both are present."""
line = format_rfc5424(
"mssql", "decky-01", "evt", SEVERITY_INFO,
timestamp=datetime.fromisoformat(_TS),
src_ip="1.1.1.1",
src="2.2.2.2",
)
event = parse_line(line)
assert event.attacker_ip == "1.1.1.1"
def test_sd_escape_chars_decoded(self):
"""Escaped characters in SD values should be unescaped."""
line = format_rfc5424(
"http", "decky-01", "evt", SEVERITY_INFO,
timestamp=datetime.fromisoformat(_TS),
src_ip="1.2.3.4",
path='/search?q=a"b',
)
event = parse_line(line)
assert '"' in event.fields["path"]
def test_nilvalue_hostname_skipped(self):
line = format_rfc5424("-", "decky-01", "evt", SEVERITY_INFO)
assert parse_line(line) is None
def test_nilvalue_service_skipped(self):
line = format_rfc5424("http", "-", "evt", SEVERITY_INFO)
assert parse_line(line) is None
# ---------------------------------------------------------------------------
# graph.py — AttackerTraversal
# ---------------------------------------------------------------------------
def _make_traversal(ip: str, hops_spec: list[tuple]) -> AttackerTraversal:
"""hops_spec: list of (ts_str, decky, service, event_type)"""
hops = [
TraversalHop(
timestamp=datetime.fromisoformat(ts),
decky=decky,
service=svc,
event_type=evt,
)
for ts, decky, svc, evt in hops_spec
]
return AttackerTraversal(attacker_ip=ip, hops=hops)
class TestTraversalGraph:
def setup_method(self):
self.t = _make_traversal("5.6.7.8", [
(_TS, "decky-01", "ssh", "login_attempt"),
(_TS2, "decky-03", "http", "request"),
(_TS3, "decky-05", "ftp", "auth_attempt"),
])
def test_first_seen(self):
assert self.t.first_seen == datetime.fromisoformat(_TS)
def test_last_seen(self):
assert self.t.last_seen == datetime.fromisoformat(_TS3)
def test_duration_seconds(self):
assert self.t.duration_seconds == 600.0
def test_deckies_ordered(self):
assert self.t.deckies == ["decky-01", "decky-03", "decky-05"]
def test_decky_count(self):
assert self.t.decky_count == 3
def test_path_string(self):
assert self.t.path == "decky-01 → decky-03 → decky-05"
def test_to_dict_keys(self):
d = self.t.to_dict()
assert d["attacker_ip"] == "5.6.7.8"
assert d["decky_count"] == 3
assert d["hop_count"] == 3
assert len(d["hops"]) == 3
assert d["path"] == "decky-01 → decky-03 → decky-05"
def test_to_dict_hops_structure(self):
hop = self.t.to_dict()["hops"][0]
assert set(hop.keys()) == {"timestamp", "decky", "service", "event_type"}
def test_repeated_decky_not_double_counted_in_path(self):
t = _make_traversal("1.1.1.1", [
(_TS, "decky-01", "ssh", "conn"),
(_TS2, "decky-02", "ftp", "conn"),
(_TS3, "decky-01", "ssh", "conn"), # revisit
])
assert t.deckies == ["decky-01", "decky-02"]
assert t.decky_count == 2
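# ---------------------------------------------------------------------------
# Sketch of the derived properties exercised above (illustrative, not the
# real graph.py code), assuming hops are already sorted by timestamp:
# ---------------------------------------------------------------------------
from dataclasses import dataclass

@dataclass
class _TraversalSketch:
    attacker_ip: str
    hops: list

    @property
    def first_seen(self):
        return self.hops[0].timestamp

    @property
    def last_seen(self):
        return self.hops[-1].timestamp

    @property
    def duration_seconds(self) -> float:
        return (self.last_seen - self.first_seen).total_seconds()

    @property
    def deckies(self) -> list:
        # dict.fromkeys de-duplicates while keeping first-visit order,
        # which is why a revisit is not double-counted in the path
        return list(dict.fromkeys(h.decky for h in self.hops))

    @property
    def decky_count(self) -> int:
        return len(self.deckies)

    @property
    def path(self) -> str:
        return " → ".join(self.deckies)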
# ---------------------------------------------------------------------------
# engine.py — CorrelationEngine
# ---------------------------------------------------------------------------
class TestEngineIngestion:
def test_ingest_returns_event(self):
engine = CorrelationEngine()
evt = engine.ingest(_make_line())
assert evt is not None
def test_ingest_blank_returns_none(self):
engine = CorrelationEngine()
assert engine.ingest("") is None
def test_lines_parsed_counter(self):
engine = CorrelationEngine()
engine.ingest(_make_line())
engine.ingest("garbage")
assert engine.lines_parsed == 2
def test_events_indexed_counter(self):
engine = CorrelationEngine()
engine.ingest(_make_line(src_ip="1.2.3.4"))
engine.ingest(_make_line(src_ip="")) # no IP
assert engine.events_indexed == 1
def test_ingest_file(self, tmp_path):
log = tmp_path / "decnet.log"
lines = [
_make_line("ssh", "decky-01", "conn", "10.0.0.1", _TS),
_make_line("http", "decky-02", "req", "10.0.0.1", _TS2),
_make_line("ftp", "decky-03", "auth", "10.0.0.1", _TS3),
]
log.write_text("\n".join(lines))
engine = CorrelationEngine()
count = engine.ingest_file(log)
assert count == 3
class TestEngineTraversals:
def _engine_with(self, specs: list[tuple]) -> CorrelationEngine:
"""specs: (service, decky, event_type, src_ip, timestamp)"""
engine = CorrelationEngine()
for svc, decky, evt, ip, ts in specs:
engine.ingest(_make_line(svc, decky, evt, ip, ts))
return engine
def test_single_decky_not_a_traversal(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "1.1.1.1", _TS),
("ssh", "decky-01", "conn", "1.1.1.1", _TS2),
])
assert engine.traversals() == []
def test_two_deckies_is_traversal(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "1.1.1.1", _TS),
("http", "decky-02", "req", "1.1.1.1", _TS2),
])
t = engine.traversals()
assert len(t) == 1
assert t[0].attacker_ip == "1.1.1.1"
assert t[0].decky_count == 2
def test_min_deckies_filter(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "1.1.1.1", _TS),
("http", "decky-02", "req", "1.1.1.1", _TS2),
("ftp", "decky-03", "auth", "1.1.1.1", _TS3),
])
assert len(engine.traversals(min_deckies=3)) == 1
assert len(engine.traversals(min_deckies=4)) == 0
def test_multiple_attackers_separate_traversals(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "1.1.1.1", _TS),
("http", "decky-02", "req", "1.1.1.1", _TS2),
("ssh", "decky-03", "conn", "9.9.9.9", _TS),
("ftp", "decky-04", "auth", "9.9.9.9", _TS2),
])
traversals = engine.traversals()
assert len(traversals) == 2
ips = {t.attacker_ip for t in traversals}
assert ips == {"1.1.1.1", "9.9.9.9"}
def test_traversals_sorted_by_first_seen(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "9.9.9.9", _TS2), # later
("ftp", "decky-02", "auth", "9.9.9.9", _TS3),
("http", "decky-03", "req", "1.1.1.1", _TS), # earlier
("smb", "decky-04", "auth", "1.1.1.1", _TS2),
])
traversals = engine.traversals()
assert traversals[0].attacker_ip == "1.1.1.1"
assert traversals[1].attacker_ip == "9.9.9.9"
def test_hops_ordered_chronologically(self):
engine = self._engine_with([
("ftp", "decky-02", "auth", "5.5.5.5", _TS2), # ingested first but later ts
("ssh", "decky-01", "conn", "5.5.5.5", _TS),
])
t = engine.traversals()[0]
assert t.hops[0].decky == "decky-01"
assert t.hops[1].decky == "decky-02"
def test_all_attackers(self):
engine = self._engine_with([
("ssh", "decky-01", "conn", "1.1.1.1", _TS),
("ssh", "decky-01", "conn", "1.1.1.1", _TS2),
("ssh", "decky-01", "conn", "2.2.2.2", _TS),
])
attackers = engine.all_attackers()
assert attackers["1.1.1.1"] == 2
assert attackers["2.2.2.2"] == 1
def test_mssql_src_field_correlated(self):
"""Verify that `src=` (mssql style) is picked up for cross-decky correlation."""
engine = CorrelationEngine()
engine.ingest(_make_line_src("decky-win1", "10.10.10.5", _TS))
engine.ingest(_make_line_src("decky-win2", "10.10.10.5", _TS2))
t = engine.traversals()
assert len(t) == 1
assert t[0].decky_count == 2
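# ---------------------------------------------------------------------------
# The grouping contract pinned down above, as a hedged sketch (hypothetical
# helper, not the engine's code): bucket events per attacker IP, order hops
# chronologically, drop attackers seen on fewer than min_deckies distinct
# nodes, and sort the survivors by first_seen.
# ---------------------------------------------------------------------------
from collections import defaultdict

def _traversals_sketch(events, min_deckies: int = 2):
    by_ip = defaultdict(list)
    for evt in events:
        if evt.attacker_ip:
            by_ip[evt.attacker_ip].append(evt)
    result = []
    for ip, evts in by_ip.items():
        evts.sort(key=lambda e: e.timestamp)  # hop order follows timestamps
        deckies = dict.fromkeys(e.decky for e in evts)
        if len(deckies) >= min_deckies:
            result.append((ip, evts))
    result.sort(key=lambda pair: pair[1][0].timestamp)  # earliest attacker first
    return result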
class TestEngineReporting:
def _two_decky_engine(self) -> CorrelationEngine:
engine = CorrelationEngine()
engine.ingest(_make_line("ssh", "decky-01", "conn", "3.3.3.3", _TS))
engine.ingest(_make_line("http", "decky-02", "req", "3.3.3.3", _TS2))
return engine
def test_report_json_structure(self):
engine = self._two_decky_engine()
report = engine.report_json()
assert "stats" in report
assert "traversals" in report
assert report["stats"]["traversals"] == 1
t = report["traversals"][0]
assert t["attacker_ip"] == "3.3.3.3"
assert t["decky_count"] == 2
def test_report_json_serialisable(self):
engine = self._two_decky_engine()
# Should not raise
json.dumps(engine.report_json())
def test_report_table_returns_rich_table(self):
from rich.table import Table
engine = self._two_decky_engine()
table = engine.report_table()
assert isinstance(table, Table)
def test_traversal_syslog_lines_count(self):
engine = self._two_decky_engine()
lines = engine.traversal_syslog_lines()
assert len(lines) == 1
def test_traversal_syslog_line_is_rfc5424(self):
engine = self._two_decky_engine()
line = engine.traversal_syslog_lines()[0]
# Must match RFC 5424 header
assert re.match(r"^<\d+>1 \S+ \S+ correlator - traversal_detected", line)
def test_traversal_syslog_contains_attacker_ip(self):
engine = self._two_decky_engine()
line = engine.traversal_syslog_lines()[0]
assert "3.3.3.3" in line
def test_traversal_syslog_severity_is_warning(self):
engine = self._two_decky_engine()
line = engine.traversal_syslog_lines()[0]
pri = int(re.match(r"^<(\d+)>", line).group(1))
assert pri == 16 * 8 + SEVERITY_WARNING # local0 + warning
def test_no_traversals_empty_json(self):
engine = CorrelationEngine()
engine.ingest(_make_line()) # single decky, no traversal
assert engine.report_json()["stats"]["traversals"] == 0
assert engine.traversal_syslog_lines() == []
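# ---------------------------------------------------------------------------
# How the correlator alert above could be emitted, reusing the project's own
# formatter. A sketch: the hostname and the extra keyword parameters are
# assumptions; only the "correlator" APP-NAME, the "traversal_detected"
# MSGID, and the warning severity are fixed by the assertions.
# ---------------------------------------------------------------------------
def _traversal_alert_sketch(t) -> str:
    return format_rfc5424(
        "correlator",           # APP-NAME checked by the regex above
        "correlator",           # HOSTNAME (assumed)
        "traversal_detected",   # MSGID checked by the regex above
        SEVERITY_WARNING,       # PRI 132 = local0 (16) * 8 + warning (4)
        src_ip=t.attacker_ip,   # keeps the attacker IP in the SD params
        path=t.path,            # assumed extra SD parameter
    )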
# ---------------------------------------------------------------------------
# _fmt_duration helper
# ---------------------------------------------------------------------------
class TestFmtDuration:
def test_seconds(self):
assert _fmt_duration(45) == "45s"
def test_minutes(self):
assert _fmt_duration(90) == "1.5m"
def test_hours(self):
assert _fmt_duration(7200) == "2.0h"
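# ---------------------------------------------------------------------------
# The formatting rule the three cases above imply, sketched with the usual
# 60 s / 3600 s breakpoints (the module's actual thresholds may differ):
# ---------------------------------------------------------------------------
def _fmt_duration_sketch(seconds: float) -> str:
    if seconds < 60:
        return f"{seconds:.0f}s"       # 45   -> "45s"
    if seconds < 3600:
        return f"{seconds / 60:.1f}m"  # 90   -> "1.5m"
    return f"{seconds / 3600:.1f}h"    # 7200 -> "2.0h"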


@@ -1,71 +0,0 @@
"""Tests for the syslog file handler."""
import logging
import logging.handlers
import os
from pathlib import Path
import pytest
import decnet.logging.file_handler as fh
@pytest.fixture(autouse=True)
def reset_handler(tmp_path, monkeypatch):
"""Reset the module-level logger between tests."""
monkeypatch.setattr(fh, "_handler", None)
monkeypatch.setattr(fh, "_logger", None)
monkeypatch.setenv(fh._LOG_FILE_ENV, str(tmp_path / "test.log"))
yield
# Remove handlers to avoid file lock issues on next test
if fh._logger is not None:
for h in list(fh._logger.handlers):
h.close()
fh._logger.removeHandler(h)
fh._handler = None
fh._logger = None
def test_write_creates_log_file(tmp_path):
log_path = tmp_path / "decnet.log"
os.environ[fh._LOG_FILE_ENV] = str(log_path)
fh.write_syslog("<134>1 2026-04-04T12:00:00+00:00 h svc - e - test message")
assert log_path.exists()
assert "test message" in log_path.read_text()
def test_write_appends_multiple_lines(tmp_path):
log_path = tmp_path / "decnet.log"
os.environ[fh._LOG_FILE_ENV] = str(log_path)
for i in range(3):
fh.write_syslog(f"<134>1 ts host svc - event{i} -")
lines = log_path.read_text().splitlines()
assert len(lines) == 3
assert "event0" in lines[0]
assert "event2" in lines[2]
def test_get_log_path_default(monkeypatch):
monkeypatch.delenv(fh._LOG_FILE_ENV, raising=False)
assert fh.get_log_path() == Path(fh._DEFAULT_LOG_FILE)
def test_get_log_path_custom(monkeypatch, tmp_path):
custom = str(tmp_path / "custom.log")
monkeypatch.setenv(fh._LOG_FILE_ENV, custom)
assert fh.get_log_path() == Path(custom)
def test_rotating_handler_configured(tmp_path):
log_path = tmp_path / "r.log"
os.environ[fh._LOG_FILE_ENV] = str(log_path)
logger = fh._get_logger()
handler = logger.handlers[0]
assert isinstance(handler, logging.handlers.RotatingFileHandler)
assert handler.maxBytes == fh._MAX_BYTES
assert handler.backupCount == fh._BACKUP_COUNT
def test_write_syslog_does_not_raise_on_bad_path(monkeypatch):
monkeypatch.setenv(fh._LOG_FILE_ENV, "/no/such/dir/that/exists/decnet.log")
# Should not raise — falls back to StreamHandler
fh.write_syslog("<134>1 ts h svc - e -")
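# ---------------------------------------------------------------------------
# Sketch of the lazy setup these tests assume (illustrative only): one
# module-level rotating handler bound to the env-configured path, falling
# back to stderr when the directory is unwritable. The env-var name and the
# size/backup constants below are placeholders, not the module's values.
# ---------------------------------------------------------------------------
def _get_logger_sketch() -> logging.Logger:
    logger = logging.getLogger("decnet.syslog.sketch")
    if not logger.handlers:
        path = os.environ.get("DECNET_LOG_FILE", "decnet.log")  # hypothetical names
        try:
            handler = logging.handlers.RotatingFileHandler(
                path, maxBytes=10 * 1024 * 1024, backupCount=5  # assumed limits
            )
        except OSError:
            handler = logging.StreamHandler()  # "bad path" must not raise
        logger.addHandler(handler)
    return logger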


@@ -1,208 +0,0 @@
"""Tests for attacker fingerprint extraction in the ingester."""
import pytest
from unittest.mock import AsyncMock, MagicMock
from decnet.web.ingester import _extract_bounty
def _make_repo():
repo = MagicMock()
repo.add_bounty = AsyncMock()
return repo
# ---------------------------------------------------------------------------
# HTTP User-Agent
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_http_useragent_extracted():
repo = _make_repo()
log_data = {
"decky": "decky-01",
"service": "http",
"attacker_ip": "10.0.0.1",
"event_type": "request",
"fields": {
"method": "GET",
"path": "/admin",
"headers": {"User-Agent": "Nikto/2.1.6", "Host": "target"},
},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_awaited_once()
call_kwargs = repo.add_bounty.call_args[0][0]
assert call_kwargs["bounty_type"] == "fingerprint"
assert call_kwargs["payload"]["fingerprint_type"] == "http_useragent"
assert call_kwargs["payload"]["value"] == "Nikto/2.1.6"
assert call_kwargs["payload"]["path"] == "/admin"
assert call_kwargs["payload"]["method"] == "GET"
@pytest.mark.asyncio
async def test_http_useragent_lowercase_key():
repo = _make_repo()
log_data = {
"decky": "decky-01",
"service": "http",
"attacker_ip": "10.0.0.2",
"event_type": "request",
"fields": {
"headers": {"user-agent": "sqlmap/1.7"},
},
}
await _extract_bounty(repo, log_data)
call_kwargs = repo.add_bounty.call_args[0][0]
assert call_kwargs["payload"]["value"] == "sqlmap/1.7"
@pytest.mark.asyncio
async def test_http_no_useragent_no_fingerprint_bounty():
repo = _make_repo()
log_data = {
"decky": "decky-01",
"service": "http",
"attacker_ip": "10.0.0.3",
"event_type": "request",
"fields": {
"headers": {"Host": "target"},
},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
@pytest.mark.asyncio
async def test_http_headers_not_dict_no_crash():
repo = _make_repo()
log_data = {
"decky": "decky-01",
"service": "http",
"attacker_ip": "10.0.0.4",
"event_type": "request",
"fields": {"headers": "raw-string-not-a-dict"},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
# ---------------------------------------------------------------------------
# VNC client version
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_vnc_client_version_extracted():
repo = _make_repo()
log_data = {
"decky": "decky-02",
"service": "vnc",
"attacker_ip": "10.0.0.5",
"event_type": "version",
"fields": {"client_version": "RFB 003.008", "src": "10.0.0.5"},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_awaited_once()
call_kwargs = repo.add_bounty.call_args[0][0]
assert call_kwargs["bounty_type"] == "fingerprint"
assert call_kwargs["payload"]["fingerprint_type"] == "vnc_client_version"
assert call_kwargs["payload"]["value"] == "RFB 003.008"
@pytest.mark.asyncio
async def test_vnc_non_version_event_no_fingerprint():
repo = _make_repo()
log_data = {
"decky": "decky-02",
"service": "vnc",
"attacker_ip": "10.0.0.6",
"event_type": "auth_response",
"fields": {"client_version": "RFB 003.008", "src": "10.0.0.6"},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
@pytest.mark.asyncio
async def test_vnc_version_event_no_client_version_field():
repo = _make_repo()
log_data = {
"decky": "decky-02",
"service": "vnc",
"attacker_ip": "10.0.0.7",
"event_type": "version",
"fields": {"src": "10.0.0.7"},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
# ---------------------------------------------------------------------------
# Credential extraction unaffected
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_credential_still_extracted_alongside_fingerprint():
repo = _make_repo()
log_data = {
"decky": "decky-03",
"service": "ftp",
"attacker_ip": "10.0.0.8",
"event_type": "auth_attempt",
"fields": {"username": "admin", "password": "1234"},
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_awaited_once()
call_kwargs = repo.add_bounty.call_args[0][0]
assert call_kwargs["bounty_type"] == "credential"
@pytest.mark.asyncio
async def test_http_credential_and_fingerprint_both_extracted():
"""An HTTP login attempt can yield both a credential and a UA fingerprint."""
repo = _make_repo()
log_data = {
"decky": "decky-03",
"service": "http",
"attacker_ip": "10.0.0.9",
"event_type": "request",
"fields": {
"username": "root",
"password": "toor",
"headers": {"User-Agent": "curl/7.88.1"},
},
}
await _extract_bounty(repo, log_data)
assert repo.add_bounty.await_count == 2
types = {c[0][0]["bounty_type"] for c in repo.add_bounty.call_args_list}
assert types == {"credential", "fingerprint"}
# ---------------------------------------------------------------------------
# Edge cases
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_fields_not_dict_no_crash():
repo = _make_repo()
log_data = {
"decky": "decky-04",
"service": "http",
"attacker_ip": "10.0.0.10",
"event_type": "request",
"fields": None,
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
@pytest.mark.asyncio
async def test_fields_missing_entirely_no_crash():
repo = _make_repo()
log_data = {
"decky": "decky-04",
"service": "http",
"attacker_ip": "10.0.0.11",
"event_type": "request",
}
await _extract_bounty(repo, log_data)
repo.add_bounty.assert_not_awaited()
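# ---------------------------------------------------------------------------
# The dispatch the cases above describe, as a hedged sketch (the real
# _extract_bounty may be structured differently): credentials and
# fingerprints are extracted independently, which is why one HTTP login
# attempt can yield two bounties.
# ---------------------------------------------------------------------------
async def _extract_bounty_sketch(repo, log_data: dict) -> None:
    fields = log_data.get("fields")
    if not isinstance(fields, dict):
        return  # tolerate missing, None, or non-dict fields
    if "username" in fields and "password" in fields:
        await repo.add_bounty({"bounty_type": "credential", "payload": dict(fields)})
    service = log_data.get("service")
    if service == "http":
        headers = fields.get("headers")
        if isinstance(headers, dict):  # raw-string headers are skipped, not fatal
            ua = headers.get("User-Agent") or headers.get("user-agent")
            if ua:
                await repo.add_bounty({
                    "bounty_type": "fingerprint",
                    "payload": {"fingerprint_type": "http_useragent", "value": ua},
                })
    elif service == "vnc" and log_data.get("event_type") == "version":
        if fields.get("client_version"):
            await repo.add_bounty({
                "bounty_type": "fingerprint",
                "payload": {"fingerprint_type": "vnc_client_version",
                            "value": fields["client_version"]},
            })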


@@ -1,217 +0,0 @@
"""
Tests for the INI loader — subsection parsing, custom service definitions,
and per-service config propagation.
"""
import pytest
import textwrap
from pathlib import Path
from decnet.ini_loader import load_ini
def _write_ini(tmp_path: Path, content: str) -> Path:
f = tmp_path / "decnet.ini"
f.write_text(textwrap.dedent(content))
return f
# ---------------------------------------------------------------------------
# Basic decky parsing (regression)
# ---------------------------------------------------------------------------
def test_basic_decky_parsed(tmp_path):
ini_file = _write_ini(tmp_path, """
[general]
net = 192.168.1.0/24
gw = 192.168.1.1
[decky-01]
ip = 192.168.1.101
services = ssh, http
""")
cfg = load_ini(ini_file)
assert len(cfg.deckies) == 1
assert cfg.deckies[0].name == "decky-01"
assert cfg.deckies[0].services == ["ssh", "http"]
assert cfg.deckies[0].service_config == {}
# ---------------------------------------------------------------------------
# Per-service subsection parsing
# ---------------------------------------------------------------------------
def test_subsection_parsed_into_service_config(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
ip = 192.168.1.101
services = ssh
[decky-01.ssh]
kernel_version = 5.15.0-76-generic
hardware_platform = x86_64
""")
cfg = load_ini(ini_file)
svc_cfg = cfg.deckies[0].service_config
assert "ssh" in svc_cfg
assert svc_cfg["ssh"]["kernel_version"] == "5.15.0-76-generic"
assert svc_cfg["ssh"]["hardware_platform"] == "x86_64"
def test_multiple_subsections_for_same_decky(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = ssh, http
[decky-01.ssh]
users = root:toor
[decky-01.http]
server_header = nginx/1.18.0
fake_app = wordpress
""")
cfg = load_ini(ini_file)
svc_cfg = cfg.deckies[0].service_config
assert svc_cfg["ssh"]["users"] == "root:toor"
assert svc_cfg["http"]["server_header"] == "nginx/1.18.0"
assert svc_cfg["http"]["fake_app"] == "wordpress"
def test_subsection_for_unknown_decky_is_ignored(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = ssh
[ghost.ssh]
kernel_version = 5.15.0
""")
cfg = load_ini(ini_file)
# ghost.ssh must not create a new decky or error out
assert len(cfg.deckies) == 1
assert cfg.deckies[0].name == "decky-01"
assert cfg.deckies[0].service_config == {}
def test_plain_decky_without_subsections_has_empty_service_config(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = http
""")
cfg = load_ini(ini_file)
assert cfg.deckies[0].service_config == {}
# ---------------------------------------------------------------------------
# Bring-your-own service (BYOS) parsing
# ---------------------------------------------------------------------------
def test_custom_service_parsed(tmp_path):
ini_file = _write_ini(tmp_path, """
[general]
net = 10.0.0.0/24
gw = 10.0.0.1
[custom-myservice]
binary = my-image:latest
exec = /usr/bin/myapp -p 8080
ports = 8080
""")
cfg = load_ini(ini_file)
assert len(cfg.custom_services) == 1
cs = cfg.custom_services[0]
assert cs.name == "myservice"
assert cs.image == "my-image:latest"
assert cs.exec_cmd == "/usr/bin/myapp -p 8080"
assert cs.ports == [8080]
def test_custom_service_without_ports(tmp_path):
ini_file = _write_ini(tmp_path, """
[custom-scanner]
binary = scanner:1.0
exec = /usr/bin/scanner
""")
cfg = load_ini(ini_file)
assert cfg.custom_services[0].ports == []
def test_custom_service_not_added_to_deckies(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = ssh
[custom-myservice]
binary = foo:bar
exec = /bin/foo
""")
cfg = load_ini(ini_file)
assert len(cfg.deckies) == 1
assert cfg.deckies[0].name == "decky-01"
assert len(cfg.custom_services) == 1
def test_no_custom_services_gives_empty_list(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = http
""")
cfg = load_ini(ini_file)
assert cfg.custom_services == []
# ---------------------------------------------------------------------------
# nmap_os parsing
# ---------------------------------------------------------------------------
def test_nmap_os_parsed_from_ini(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-win]
ip = 192.168.1.101
services = rdp, smb
nmap_os = windows
""")
cfg = load_ini(ini_file)
assert cfg.deckies[0].nmap_os == "windows"
def test_nmap_os_defaults_to_none_when_absent(tmp_path):
ini_file = _write_ini(tmp_path, """
[decky-01]
services = ssh
""")
cfg = load_ini(ini_file)
assert cfg.deckies[0].nmap_os is None
@pytest.mark.parametrize("os_family", ["linux", "windows", "bsd", "embedded", "cisco"])
def test_nmap_os_all_families_accepted(tmp_path, os_family):
ini_file = _write_ini(tmp_path, f"""
[decky-01]
services = ssh
nmap_os = {os_family}
""")
cfg = load_ini(ini_file)
assert cfg.deckies[0].nmap_os == os_family
def test_nmap_os_propagates_to_amount_expanded_deckies(tmp_path):
ini_file = _write_ini(tmp_path, """
[corp-printers]
services = snmp
nmap_os = embedded
amount = 3
""")
cfg = load_ini(ini_file)
assert len(cfg.deckies) == 3
for d in cfg.deckies:
assert d.nmap_os == "embedded"
def test_nmap_os_hyphen_alias_accepted(tmp_path):
"""nmap-os= (hyphen) should work as an alias for nmap_os=."""
ini_file = _write_ini(tmp_path, """
[decky-01]
services = ssh
nmap-os = bsd
""")
cfg = load_ini(ini_file)
assert cfg.deckies[0].nmap_os == "bsd"
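# ---------------------------------------------------------------------------
# A sketch of the section routing these tests encode (hypothetical helper,
# not the loader's code): "custom-*" sections become bring-your-own services,
# "<decky>.<service>" sections fold into the parent's service_config, and
# subsections for unknown deckies are silently dropped.
# ---------------------------------------------------------------------------
import configparser

def _route_sections_sketch(ini_text: str):
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    deckies: dict = {}
    custom: dict = {}
    pending: dict = {}
    for name in cp.sections():
        if name == "general":
            continue  # network-wide settings, handled elsewhere
        if name.startswith("custom-"):
            custom[name.removeprefix("custom-")] = dict(cp[name])
        elif "." in name:
            parent, svc = name.split(".", 1)
            pending.setdefault(parent, {})[svc] = dict(cp[name])
        else:
            deckies[name] = {"options": dict(cp[name]), "service_config": {}}
    for parent, svc_cfg in pending.items():
        if parent in deckies:  # [ghost.ssh] with no [ghost] is ignored
            deckies[parent]["service_config"] = svc_cfg
    return deckies, custom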


@@ -1,155 +0,0 @@
"""
Tests for DECNET application logging system.
Covers:
- get_logger() factory and _ComponentFilter injection
- Rfc5424Formatter component-aware APP-NAME field
- Log level gating via DECNET_DEVELOPER
- Component tags for all five microservice layers
"""
from __future__ import annotations
import logging
import os
import re
import pytest
from decnet.logging import _ComponentFilter, get_logger
# RFC 5424 parser: <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
_RFC5424_RE = re.compile(
r"^<(\d+)>1 " # PRI
r"\S+ " # TIMESTAMP
r"\S+ " # HOSTNAME
r"(\S+) " # APP-NAME ← what we care about
r"\S+ " # PROCID
r"(\S+) " # MSGID
r"(.+)$", # SD + MSG
)
def _format_record(logger: logging.Logger, level: int, msg: str) -> str:
"""Emit a log record through the root handler and return the formatted string."""
from decnet.config import Rfc5424Formatter
formatter = Rfc5424Formatter()
record = logger.makeRecord(
logger.name, level, "<test>", 0, msg, (), None
)
# Run all filters attached to the logger so decnet_component gets injected
for f in logger.filters:
f.filter(record)
return formatter.format(record)
class TestGetLogger:
def test_returns_logger(self):
log = get_logger("cli")
assert isinstance(log, logging.Logger)
def test_logger_name(self):
log = get_logger("engine")
assert log.name == "decnet.engine"
def test_filter_attached(self):
log = get_logger("api")
assert any(isinstance(f, _ComponentFilter) for f in log.filters)
def test_idempotent_filter(self):
log = get_logger("mutator")
get_logger("mutator") # second call
component_filters = [f for f in log.filters if isinstance(f, _ComponentFilter)]
assert len(component_filters) == 1
@pytest.mark.parametrize("component", ["cli", "engine", "api", "mutator", "collector"])
def test_all_components_registered(self, component):
log = get_logger(component)
assert any(isinstance(f, _ComponentFilter) for f in log.filters)
class TestComponentFilter:
def test_injects_attribute(self):
f = _ComponentFilter("engine")
record = logging.LogRecord("test", logging.INFO, "", 0, "msg", (), None)
assert f.filter(record) is True
assert record.decnet_component == "engine" # type: ignore[attr-defined]
def test_always_passes(self):
f = _ComponentFilter("collector")
record = logging.LogRecord("test", logging.DEBUG, "", 0, "msg", (), None)
assert f.filter(record) is True
class TestRfc5424FormatterComponentAware:
@pytest.mark.parametrize("component", ["cli", "engine", "api", "mutator", "collector"])
def test_app_name_is_component(self, component):
log = get_logger(component)
line = _format_record(log, logging.INFO, "test message")
m = _RFC5424_RE.match(line)
assert m is not None, f"Not RFC 5424: {line!r}"
assert m.group(2) == component, f"Expected APP-NAME={component!r}, got {m.group(2)!r}"
def test_fallback_app_name_without_component(self):
"""Untagged loggers (no _ComponentFilter) fall back to 'decnet'."""
from decnet.config import Rfc5424Formatter
formatter = Rfc5424Formatter()
record = logging.LogRecord("some.module", logging.INFO, "", 0, "hello", (), None)
line = formatter.format(record)
m = _RFC5424_RE.match(line)
assert m is not None
assert m.group(2) == "decnet"
def test_msgid_is_logger_name(self):
log = get_logger("engine")
line = _format_record(log, logging.INFO, "deploying")
m = _RFC5424_RE.match(line)
assert m is not None
assert m.group(3) == "decnet.engine"
class TestLogLevelGating:
def test_configure_logging_normal_mode_sets_info(self):
"""_configure_logging(dev=False) must set root to INFO."""
from decnet.config import _configure_logging, Rfc5424Formatter
root = logging.getLogger()
original_level = root.level
original_handlers = root.handlers[:]
# Remove any existing RFC5424 handlers so idempotency check doesn't skip
root.handlers = [
h for h in root.handlers
if not (isinstance(h, logging.StreamHandler) and isinstance(h.formatter, Rfc5424Formatter))
]
try:
_configure_logging(dev=False)
assert root.level == logging.INFO
finally:
root.setLevel(original_level)
root.handlers = original_handlers
def test_configure_logging_dev_mode_sets_debug(self):
"""_configure_logging(dev=True) must set root to DEBUG."""
from decnet.config import _configure_logging, Rfc5424Formatter
root = logging.getLogger()
original_level = root.level
original_handlers = root.handlers[:]
root.handlers = [
h for h in root.handlers
if not (isinstance(h, logging.StreamHandler) and isinstance(h.formatter, Rfc5424Formatter))
]
try:
_configure_logging(dev=True)
assert root.level == logging.DEBUG
finally:
root.setLevel(original_level)
root.handlers = original_handlers
def test_debug_enabled_in_developer_mode(self):
"""Programmatically setting DEBUG on root allows debug records through."""
root = logging.getLogger()
original_level = root.level
root.setLevel(logging.DEBUG)
try:
assert root.isEnabledFor(logging.DEBUG)
finally:
root.setLevel(original_level)
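# ---------------------------------------------------------------------------
# What the factory under test is expected to do, sketched (illustrative, not
# decnet.logging's code): tag every record with a component name through a
# filter that is attached at most once per logger.
# ---------------------------------------------------------------------------
class _ComponentFilterSketch(logging.Filter):
    def __init__(self, component: str) -> None:
        super().__init__()
        self.component = component

    def filter(self, record: logging.LogRecord) -> bool:
        record.decnet_component = self.component  # read back by the formatter
        return True  # never suppresses a record

def _get_logger_sketch(component: str) -> logging.Logger:
    logger = logging.getLogger(f"decnet.{component}")
    if not any(isinstance(f, _ComponentFilterSketch) for f in logger.filters):
        logger.addFilter(_ComponentFilterSketch(component))  # idempotent
    return logger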


@@ -1,134 +0,0 @@
"""Tests for RFC 5424 syslog formatter."""
import re
from datetime import datetime, timezone
from decnet.logging.syslog_formatter import (
SEVERITY_ERROR,
SEVERITY_INFO,
SEVERITY_WARNING,
format_rfc5424,
)
# RFC 5424 header regex: <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD [MSG]
_RFC5424_RE = re.compile(
r"^<(\d+)>1 " # PRI + version
r"(\S+) " # TIMESTAMP
r"(\S+) " # HOSTNAME
r"(\S+) " # APP-NAME
r"- " # PROCID (NILVALUE)
r"(\S+) " # MSGID
r"(.+)$", # SD + optional MSG
)
def _parse(line: str) -> re.Match:
m = _RFC5424_RE.match(line)
assert m is not None, f"Not RFC 5424: {line!r}"
return m
class TestPRI:
def test_info_pri(self):
line = format_rfc5424("http", "host1", "request", SEVERITY_INFO)
m = _parse(line)
pri = int(m.group(1))
assert pri == 16 * 8 + 6 # local0 + info = 134
def test_warning_pri(self):
line = format_rfc5424("http", "host1", "warn", SEVERITY_WARNING)
pri = int(_parse(line).group(1))
assert pri == 16 * 8 + 4 # 132
def test_error_pri(self):
line = format_rfc5424("http", "host1", "err", SEVERITY_ERROR)
pri = int(_parse(line).group(1))
assert pri == 16 * 8 + 3 # 131
def test_pri_range(self):
for sev in range(8):
line = format_rfc5424("svc", "h", "e", sev)
pri = int(_parse(line).group(1))
assert 0 <= pri <= 191
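# PRI refresher (RFC 5424 §6.2.1): PRI = facility * 8 + severity. DECNET
# emits on facility local0 (16), so the constants asserted above fall out
# directly:
def _pri(facility: int, severity: int) -> int:
    return facility * 8 + severity

assert _pri(16, 6) == 134  # info
assert _pri(16, 4) == 132  # warning
assert _pri(16, 3) == 131  # error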
class TestTimestamp:
def test_utc_timestamp(self):
ts_str = datetime(2026, 4, 4, 12, 0, 0, tzinfo=timezone.utc).isoformat()
line = format_rfc5424("svc", "h", "e", timestamp=datetime(2026, 4, 4, 12, 0, 0, tzinfo=timezone.utc))
m = _parse(line)
assert m.group(2) == ts_str
def test_default_timestamp_is_utc(self):
line = format_rfc5424("svc", "h", "e")
ts_field = _parse(line).group(2)
# Should end with +00:00 or Z
assert "+" in ts_field or ts_field.endswith("Z")
class TestHeader:
def test_hostname(self):
line = format_rfc5424("http", "decky-01", "request")
assert _parse(line).group(3) == "decky-01"
def test_appname(self):
line = format_rfc5424("mysql", "host", "login_attempt")
assert _parse(line).group(4) == "mysql"
def test_msgid(self):
line = format_rfc5424("ftp", "host", "login_attempt")
assert _parse(line).group(5) == "login_attempt"
def test_procid_is_nilvalue(self):
line = format_rfc5424("svc", "h", "e")
assert " - " in line # PROCID is always NILVALUE
def test_appname_truncated(self):
long_name = "a" * 100
line = format_rfc5424(long_name, "h", "e")
appname = _parse(line).group(4)
assert len(appname) <= 48
def test_msgid_truncated(self):
long_msgid = "x" * 100
line = format_rfc5424("svc", "h", long_msgid)
msgid = _parse(line).group(5)
assert len(msgid) <= 32
class TestStructuredData:
def test_nilvalue_when_no_fields(self):
line = format_rfc5424("svc", "h", "e")
sd_and_msg = _parse(line).group(6)
assert sd_and_msg.startswith("-")
def test_sd_element_present(self):
line = format_rfc5424("http", "h", "request", remote_addr="1.2.3.4", method="GET")
sd_and_msg = _parse(line).group(6)
assert sd_and_msg.startswith("[decnet@55555 ")
assert 'remote_addr="1.2.3.4"' in sd_and_msg
assert 'method="GET"' in sd_and_msg
def test_sd_escape_double_quote(self):
line = format_rfc5424("svc", "h", "e", ua='foo"bar')
assert r'ua="foo\"bar"' in line
def test_sd_escape_backslash(self):
line = format_rfc5424("svc", "h", "e", path="a\\b")
assert r'path="a\\b"' in line
def test_sd_escape_close_bracket(self):
line = format_rfc5424("svc", "h", "e", val="a]b")
assert r'val="a\]b"' in line
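# The three escapes above are exactly the PARAM-VALUE escapes RFC 5424
# §6.3.3 mandates. A sketch, escaping the backslash first so the later
# replacements don't get double-escaped:
def _escape_sd_value_sketch(value: str) -> str:
    return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")

assert _escape_sd_value_sketch("a]b") == "a\\]b"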
class TestMsg:
def test_optional_msg_appended(self):
line = format_rfc5424("svc", "h", "e", msg="hello world")
assert line.endswith(" hello world")
def test_no_msg_no_trailing_space_in_sd(self):
line = format_rfc5424("svc", "h", "e", key="val")
# SD element closes with ]
assert line.rstrip().endswith("]")