Compare commits

...

84 Commits

Author SHA1 Message Date
c3c1cd2fa6 modified: .gitignore
Some checks failed
CI / Lint (ruff) (push) Failing after 16s
CI / Test (pytest) (3.11) (push) Failing after 47s
CI / Test (pytest) (3.12) (push) Failing after 49s
CI / SAST (bandit) (push) Successful in 12s
CI / Dependency audit (pip-audit) (push) Successful in 23s
CI / Open PR to main (push) Has been skipped
2026-04-12 02:03:49 -04:00
68b13b8a59 added: decnet_logging.py stub for telnet monitoring 2026-04-12 02:03:06 -04:00
f8bb134d70 added: fixed mssql service 2026-04-12 02:01:45 -04:00
20fba18711 fix(telnet): disable imklog in rsyslog — containers cannot access /proc/kmsg 2026-04-12 01:45:46 -04:00
b325fc8c5f fix(logging): silence Twisted internal logs and Werkzeug startup banner from stdout 2026-04-12 01:43:42 -04:00
1484d2f625 fix(telnet): use busybox-static for telnetd applet, rm stale fifo on restart 2026-04-12 01:39:31 -04:00
f8ae9ce2a6 refactor(deps): move live test deps to pyproject.toml optional-dependencies[live] 2026-04-12 01:35:16 -04:00
662a5e43e8 feat(tests): add live subprocess integration test suite for services
Spins up each service's server.py in a real subprocess via a free ephemeral
port (PORT env var), connects with real protocol clients, and asserts both
correct protocol behavior and RFC 5424 log output.

- 44 live tests across 10 services: http, ftp, smtp, redis, mqtt,
  mysql, postgres, mongodb, pop3, imap
- Shared conftest.py: _ServiceProcess (bg reader thread + queue),
  free_port, live_service fixture, assert_rfc5424 helper
- PORT env var added to all 10 targeted server.py templates
- New pytest marker `live`; excluded from default addopts run
- requirements-live-tests.txt: flask, twisted + protocol clients
2026-04-12 01:34:16 -04:00
d63e396410 fix(protocols): guard against zero/malformed length fields in binary protocol parsers
MongoDB had the same infinite-loop bug as MSSQL (msg_len=0 → buffer never
shrinks in while loop). Postgres, MySQL, and MQTT had related length-field
issues (stuck state, resource exhaustion, overlong remaining-length).

Also fixes an existing MongoDB _op_reply struct.pack format bug (extra 'q'
specifier caused struct.error on any OP_QUERY response).

Adds 53 regression + protocol boundary tests across MSSQL, MongoDB,
Postgres, MySQL, and MQTT, including a _run_with_timeout threading harness
to catch infinite loops and @pytest.mark.fuzz hypothesis tests for each.
2026-04-12 01:01:13 -04:00
65d585569b fix(telnet): replace Cowrie with real busybox telnetd + rsyslog logging
Cowrie was exposing an SSH daemon on port 22 alongside the telnet service
even when COWRIE_SSH_ENABLED=false, contaminating deployments that did not
request an SSH service.

New implementation mirrors the SSH service pattern:
- busybox telnetd in foreground mode on port 23
- /bin/login for real PAM authentication (brute-force attempts logged)
- rsyslog RFC 5424 bridge piped to stdout for Docker log capture
- Configurable root password and hostname via env vars
- No Cowrie dependency
2026-04-12 00:34:45 -04:00
c384a3103a refactor: separate engine, collector, mutator, and fleet into independent subpackages
- decnet/engine/ — container lifecycle (deploy, teardown, status); _kill_api removed
- decnet/collector/ — Docker log streaming (moved from web/collector.py)
- decnet/mutator/ — mutation engine (no longer imports from cli or duplicates deployer code)
- decnet/fleet.py — shared decky-building logic extracted from cli.py

Cross-contamination eliminated:
- web router no longer imports from decnet.cli
- mutator no longer imports from decnet.cli
- cli no longer imports from decnet.web
- _kill_api() moved to cli (process management, not engine concern)
- _compose_with_retry duplicate removed from mutator
2026-04-12 00:26:22 -04:00
c79f96f321 refactor(ssh): consolidate real_ssh into ssh, remove duplication
real_ssh was a separate service name pointing to the same template and
behaviour as ssh. Merged them: ssh is now the single real-OpenSSH service.

- Rename templates/real_ssh/ → templates/ssh/
- Remove decnet/services/real_ssh.py
- Deaddeck archetype updated: services=["ssh"]
- Merge test_real_ssh.py into test_ssh.py (includes deaddeck + logging tests)
- Drop decnet.services.real_ssh from test_build module list
2026-04-11 19:51:41 -04:00
d77def64c4 fix(cli): import Path locally in deploy to fix NameError 2026-04-11 19:46:58 -04:00
ce182652ad fix(cli): add __main__ guard so python -m decnet.cli actually runs the app
The collector subprocess was spawned via 'python3 -m decnet.cli collect'
but cli.py had no 'if __name__ == __main__: app()' guard. Python executed
the module, defined all functions, then exited cleanly with code 0 without
ever calling the collect command. No output, no log file, exit 0 — silent
non-start every time.

Also route collector stderr to <log_file>.collector.log so future crashes
are visible instead of disappearing into DEVNULL.
2026-04-11 19:42:10 -04:00
a6063efbb9 fix(collector): daemonize background subprocesses with start_new_session
Collector and mutator watcher subprocesses were spawned without
start_new_session=True, leaving them in the parent's process group.
SIGHUP (sent when the controlling terminal closes) killed both
processes silently — stdout/stderr were DEVNULL so the crash was
invisible.

Also update test_services and test_composer to reflect the ssh plugin
no longer using Cowrie env vars (replaced with SSH_ROOT_PASSWORD /
SSH_HOSTNAME matching the real_ssh plugin).
2026-04-11 19:36:46 -04:00
d4ac53c0c9 feat(ssh): replace Cowrie with real OpenSSH + rsyslog logging pipeline
Scraps the Cowrie emulation layer. The real_ssh template now runs a
genuine sshd backed by a three-layer logging stack forwarded to stdout
as RFC 5424 for the DECNET collector:

  auth,authpriv.*  → rsyslogd → named pipe → stdout  (logins/failures)
  user.*           → rsyslogd → named pipe → stdout  (PROMPT_COMMAND cmds)
  sudo syslog=auth → rsyslogd → named pipe → stdout  (privilege escalation)
  sudo logfile     → /var/log/sudo.log               (local backup with I/O)

The ssh.py service plugin now points to templates/real_ssh and drops all
COWRIE_* / NODE_NAME env vars, sharing the same compose fragment shape as
real_ssh.py.
2026-04-11 19:12:54 -04:00
9ca3b4691d docs(roadmap): tick completed service implementations 2026-04-11 04:02:50 -04:00
babad5ce65 refactor(collector): use state file for container detection, drop label heuristics
_load_service_container_names() reads decnet-state.json and builds the
exact set of expected container names ({decky}-{service}). is_service_container()
and is_service_event() do a direct set lookup — no regex, no label
inspection, no heuristics.
2026-04-11 03:58:52 -04:00
7abae5571a fix(collector): fix container detection and auto-start on deploy
Two bugs caused the log file to never be written:

1. is_service_container() used regex '^decky-\d+-\w' which only matched
   the old decky-01-smtp naming style. Actual containers are named
   omega-decky-smtp, relay-decky-smtp, etc. Fixed by using Docker Compose
   labels instead: com.docker.compose.project=decnet + non-empty
   depends_on discriminates service containers from base (sleep infinity)
   containers reliably regardless of decky naming convention.
   Added is_service_event() for the Docker events path.

2. The collector was only started when --api was used. Added a 'collect'
   CLI subcommand (decnet collect --log-file <path>) and wired it into
   deploy as an auto-started background process when --api is not in use.
   Default log path: /var/log/decnet/decnet.log
2026-04-11 03:56:53 -04:00
377ba0410c feat(deploy): add --parallel flag for concurrent image builds
When --parallel is set:
- DOCKER_BUILDKIT=1 is injected into the subprocess environment to
  ensure BuildKit is active regardless of host daemon config
- docker compose build runs first (all images built concurrently)
- docker compose up -d follows without --build (no redundant checks)

Without --parallel the original up --build path is preserved.
--parallel and --no-cache compose correctly (build --no-cache).
2026-04-11 03:46:52 -04:00
5ef48d60be fix(conpot): add syslog bridge entrypoint for logging pipeline
Conpot is a third-party app with its own Python logger — it never calls
decnet_logging. Added entrypoint.py as a subprocess wrapper that:
- Launches conpot and captures its stdout/stderr
- Classifies each line (startup/request/warning/error/log)
- Extracts source IPs via regex
- Emits RFC 5424 syslog lines to stdout for Docker/collector pickup

Entrypoint is self-contained (no import of shared decnet_logging.py)
because the conpot base image runs Python 3.6, which cannot parse the
dict[str, Any] / str | None type syntax used in the canonical file.
2026-04-11 03:44:41 -04:00
fe46b8fc0b fix(conpot): use honeynet/conpot:latest base, run as conpot user
The BASE_IMAGE build arg was being unconditionally overwritten by
composer.py with the decky's distro build_base (debian:bookworm-slim),
turning the conpot container into a bare Debian image with no conpot
installation — hence the silent restart loop.

Two fixes:
1. composer.py: use args.setdefault() so services that pre-declare
   BASE_IMAGE in their compose_fragment() win over the distro default.
2. conpot.py: pre-declare BASE_IMAGE=honeynet/conpot:latest in build
   args so it always uses the upstream image regardless of decky distro.

Also removed the USER decnet switch from the conpot Dockerfile. The
upstream image already runs as the non-root 'conpot' user; switching to
'decnet' broke pkg_resources because conpot's eggs live under
/home/conpot/.local and are only on sys.path for that user.
2026-04-11 03:32:11 -04:00
c7713c6228 feat(imap,pop3): full IMAP4rev1 + POP3 bait mailbox implementation
IMAP: extended to full IMAP4rev1 — 10 bait emails (AWS keys, DB creds,
tokens, VPN config, root pw etc.), LIST/LSUB/STATUS/FETCH/UID FETCH/
SEARCH/CLOSE/NOOP, proper SELECT untagged responses (EXISTS, UIDNEXT,
FLAGS, PERMANENTFLAGS), CAPABILITY with IDLE/LITERAL+/AUTH=PLAIN.
FETCH correctly handles sequence sets (1:*, 1:3, *), item dispatch
(FLAGS, ENVELOPE, BODY[], RFC822, RFC822.SIZE), and places body literals
last per RFC 3501.

POP3: extended with same 10 bait emails, fixed banner env var key
(POP3_BANNER not IMAP_BANNER), CAPA fully populated (TOP/UIDL/USER/
RESP-CODES/SASL), TOP (headers + N body lines), UIDL (msg-N format),
DELE/RSET with _deleted set tracking, NOOP. _active_messages() helper
excludes DELE'd messages from STAT/LIST/UIDL.

Both: DEBT-026 stub added (_EMAIL_SEED_PATH env var, documented in
DEBT.md for next-session JSON seed file wiring).

Tests: test_imap.py expanded to 27 cases, test_pop3.py to 22 cases —
860 total tests passing.
2026-04-11 03:12:32 -04:00
1196363d0b feat(os_fingerprint): Phase 2 — add icmp_ratelimit + icmp_ratemask sysctls
Windows: both 0 (no ICMP rate limiting — matches real Windows behavior)
Linux: 1000ms / mask 6168 (kernel defaults)
BSD: 250ms / mask 6168 (FreeBSD default is faster than Linux)
Embedded/Cisco: both 0 (most firmware doesn't rate-limit ICMP)

These affect nmap's IE and U1 probe groups which measure ICMP error
response timing to closed UDP ports. Windows responds to all probes
instantly while Linux throttles to ~1/sec.

Tests: 10 new cases (5 per sysctl). Suite: 822 passed.
2026-04-10 16:41:23 -04:00
62a67f3d1d docs(HARDENING): rewrite roadmap based on live scan findings
Phase 1 is complete. Live testing revealed:
- Window size (64240) is already correct — Phase 2 window mangling unnecessary
- TI=Z (IP ID = 0) is the single remaining blocker for Windows spoofing
- ip_no_pmtu_disc does NOT fix TI=Z (tested and confirmed)

Revised phase plan:
- Phase 2: ICMP tuning (icmp_ratelimit + icmp_ratemask sysctls)
- Phase 3: NFQUEUE daemon for IP ID rewriting (fixes TI=Z)
- Phase 4: diminishing returns, not recommended

Added detailed NFQUEUE architecture, TCPOPTSTRIP notes, and
note clarifying P= field in nmap output.
2026-04-10 16:38:27 -04:00
6df2c9ccbf revert(os_fingerprint): undo ip_no_pmtu_disc=1 for windows — was incorrect
ip_no_pmtu_disc controls PMTU discovery for UDP/ICMP paths only.
TI=Z originates from ip_select_ident() in the kernel TCP stack setting
IP ID=0 for DF=1 TCP packets — a namespace-scoped sysctl cannot change this.
The previous commit was based on incorrect root-cause analysis.
2026-04-10 16:29:44 -04:00
b1f6c3b84a fix(os_fingerprint): set ip_no_pmtu_disc=1 for windows to eliminate TI=Z
When ip_no_pmtu_disc=0 the Linux kernel sets DF=1 on TCP packets and uses
IP ID=0 (RFC 6864). nmap's TI=Z fingerprint has no Windows match in its DB,
causing 91% confidence guesses of 'Linux 2.4/2.6 embedded' regardless of
TTL being 128. Setting ip_no_pmtu_disc=1 allows non-zero IP ID generation.

Trade-off: DF bit is not set on outgoing packets (slightly wrong for Windows)
but TI=Z is far more damaging to the spoof than losing DF accuracy.
2026-04-10 16:19:32 -04:00
5fdfe67f2f fix(cowrie): add missing COPY+chmod for entrypoint.sh in Dockerfile
The entrypoint.sh was present in the build context but never COPYed into
the image, causing 'stat /entrypoint.sh: no such file or directory' at
container start. Added COPY+chmod before the USER decnet instruction so
the script is installed as root and is executable by all users.
2026-04-10 16:15:05 -04:00
4fac9570ec chore: add arche-test.ini OS fingerprint smoke-test fleet 2026-04-10 16:11:18 -04:00
5e83c9e48d feat(os_fingerprint): Phase 1 — extend OS sysctls with 6 new fingerprint knobs
Add tcp_timestamps, tcp_window_scaling, tcp_sack, tcp_ecn, ip_no_pmtu_disc,
and tcp_fin_timeout to every OS profile in OS_SYSCTLS.

All 6 are network-namespace-scoped and safe to set per-container without
--privileged. They directly influence nmap's OPS, WIN, ECN, and T2-T6
probe groups, making OS family detection significantly more convincing.

Key changes:
- tcp_timestamps=0 for windows/embedded/cisco (strongest Windows discriminator)
- tcp_ecn=2 for linux (ECN offer), 0 for all others
- tcp_sack=0 / tcp_window_scaling=0 for embedded/cisco
- ip_no_pmtu_disc=1 for embedded/cisco (DF bit ICMP behaviour)
- Expose _REQUIRED_SYSCTLS frozenset for completeness assertions

Tests: 88 new test cases across all OS families and composer integration.
Total suite: 812 passed.
2026-04-10 16:06:36 -04:00
d8457c57f3 docs: add OS fingerprint spoofing hardening roadmap 2026-04-10 16:02:00 -04:00
38d37f862b docs: Detail attachable Swarm overlay backend in FUTURE.md 2026-04-10 03:00:03 -04:00
fa8b0f3cb5 docs: Add latency simulation to FUTURE.md 2026-04-10 02:53:00 -04:00
db425df6f2 docs: Add FUTURE.md to capture long-term architectural visions 2026-04-10 02:48:28 -04:00
73e68388c0 fix(conpot): Refactor permissions to use dedicated decnet user via chown 2026-04-10 02:27:02 -04:00
682322d564 fix(conpot): Resolve silent crash by running as nobody and ensuring permissions 2026-04-10 02:25:45 -04:00
33885a2eec fix(conpot): Keep container as root to allow port 502 binding and fix user not found error 2026-04-10 02:20:46 -04:00
f583b3d699 fix(services): Resolve protocol realism gaps and update technical debt register
- Add dynamic challenge nonces to Postgres, VNC, and SIP.
- Add basic keyspace lookup and mock data to Redis.
- Correct MSSQL TDS pre-login offset bounds.
- Support MongoDB OP_MSG handshake version checking.
- Suppress Werkzeug HTTP server headers and normalize FTPAnonymousShell response.
- Add tracking for Dynamic Bait Store (DEBT-027) via DEBT.md.
2026-04-10 02:16:42 -04:00
5cb6666d7b docs: Append bug ledger implementation plan to REALISM_AUDIT.md 2026-04-10 01:58:23 -04:00
25b6425496 Update REALISM_AUDIT.md with completed tasks 2026-04-10 01:55:14 -04:00
08242a4d84 Implement ICS/SCADA and IMAP Bait features 2026-04-10 01:50:08 -04:00
63fb477e1f feat: add smtp_relay service; add service_testing/ init
- decnet/services/smtp_relay.py: open relay variant of smtp, same template
  with SMTP_OPEN_RELAY=1 baked into the environment
- tests/service_testing/__init__.py: init so pytest discovers the subdirectory
2026-04-10 01:09:15 -04:00
94f82c9089 feat(smtp): fix DATA state machine; add SMTP_OPEN_RELAY mode
- Buffer DATA body until CRLF.CRLF terminator — fixes 502-on-every-body-line bug
- SMTP_OPEN_RELAY=1: AUTH accepted (235), RCPT TO accepted for any domain,
  full DATA pipeline with queued-as message ID
- Default (SMTP_OPEN_RELAY=0): credential harvester — AUTH rejected (535)
  but connection stays open, RCPT TO returns 554 relay denied
- SASL PLAIN and LOGIN multi-step AUTH both decoded and logged
- RSET clears all per-transaction state
- Add development/SMTP_RELAY.md, IMAP_BAIT.md, ICS_SCADA.md, BUG_FIXES.md
  (live-tested service realism plans)
2026-04-10 01:03:47 -04:00
40cd582253 fix: restore forward_syslog as no-op stub; all service server.py files import it 2026-04-10 00:43:50 -04:00
24f02c3466 fix: resolve all bandit SAST findings in templates/
- Add # nosec B104 to all intentional 0.0.0.0 binds in honeypot servers
  (hardcoded_bind_all_interfaces is by design — deckies must accept attacker connections)
- Add # nosec B101 to assert statements used for protocol validation in ldap/snmp
- Add # nosec B105 to fake SASL placeholder in ldap
- Add # nosec B108 to /tmp usage in smb template
- Exclude root-owned auto-generated decnet_logging.py copies from bandit scan
  via pyproject.toml [tool.bandit] config (synced by _sync_logging_helper at deploy)
2026-04-10 00:24:40 -04:00
25ba3fb56a feat: replace bind-mount log pipeline with Docker log streaming
Services now print RFC 5424 to stdout; Docker captures via json-file driver.
A new host-side collector (decnet.web.collector) streams docker logs from all
running decky service containers and writes RFC 5424 + parsed JSON to the host
log file. The existing ingester continues to tail the .json file unchanged.
rsyslog can consume the .log file independently — no DECNET involvement needed.

Removes: bind-mount volume injection, _LOG_NETWORK bridge, log_target config
field and --log-target CLI flag, TCP syslog forwarding from service templates.
2026-04-10 00:14:14 -04:00
8d023147cc fix: chmod 777 log dir on compose generation so container decnet user can write logs 2026-04-09 19:36:53 -04:00
14f7a535db fix: use model_dump(mode='json') to serialize datetime fields; fixes SSE stream silently dying post-ORM migration 2026-04-09 19:29:27 -04:00
cea6279a08 fix: add Last-Event-ID to CORS allow_headers to unblock SSE reconnects 2026-04-09 19:26:24 -04:00
6b8392102e fix: emit stats/histogram snapshot on SSE connect; remove polling api.get('/stats') from Dashboard 2026-04-09 19:23:24 -04:00
d2a569496d fix: add get_stream_user dependency for SSE endpoint; allow query-string token for EventSource 2026-04-09 19:20:38 -04:00
f20e86826d fix: derive default CORS origin from DECNET_WEB_HOST/PORT instead of hardcoded ports 2026-04-09 19:15:45 -04:00
29da2a75b3 fix: add localhost:9090 to CORS defaults; revert broken relative-URL and proxy changes 2026-04-09 19:14:40 -04:00
3362325479 fix: resolve CORS blocking Vite dev server (add 5173 to defaults, add proxy) 2026-04-09 19:10:10 -04:00
34a57d6f09 fix: make setcap resilient — no-op when Python absent or symlink-only 2026-04-09 19:04:52 -04:00
016115a523 fix: clear all addressable technical debt (DEBT-005 through DEBT-025)
Security:
- DEBT-008: remove query-string token auth; header-only Bearer now enforced
- DEBT-013: add regex constraint ^[a-z0-9\-]{1,64}$ on decky_name path param
- DEBT-015: stop leaking raw exception detail to API clients; log server-side
- DEBT-016: validate search (max_length=512) and datetime params with regex

Reliability:
- DEBT-014: wrap SSE event_generator in try/except; yield error frame on failure
- DEBT-017: emit log.warning/error on DB init retry; silent failures now visible

Observability / Docs:
- DEBT-020: add 401/422 response declarations to all route decorators

Infrastructure:
- DEBT-018: add HEALTHCHECK to all 24 template Dockerfiles
- DEBT-019: add USER decnet + setcap cap_net_bind_service to all 24 Dockerfiles
- DEBT-024: bump Redis template version 7.0.12 → 7.2.7

Config:
- DEBT-012: validate DECNET_API_PORT and DECNET_WEB_PORT range (1-65535)

Code quality:
- DEBT-010: delete 22 duplicate decnet_logging.py copies; deployer injects canonical
- DEBT-022: closed as false positive (print only in module docstring)
- DEBT-009: closed as false positive (templates already use structured syslog_line)

Build:
- DEBT-025: generate requirements.lock via pip freeze

Testing:
- DEBT-005/006/007: comprehensive test suite added across tests/api/
- conftest: in-memory SQLite + StaticPool + monkeypatched session_factory
- fuzz mark added; default run excludes fuzz; -n logical parallelism

DEBT.md updated: 23/25 items closed; DEBT-011 (Alembic) and DEBT-023 (digest pinning) remain
2026-04-09 19:02:51 -04:00
0166d0d559 fix: clean up db layer — model_dump, timezone-aware timestamps, unified histogram, async load_state 2026-04-09 18:46:35 -04:00
dbf6d13b95 fix: use :memory: + StaticPool for test DBs, eliminates file:testdb_* garbage 2026-04-09 18:39:36 -04:00
d15c106b44 test: fix async fixture isolation, add fuzz marks, parallelize with xdist
- Rebuild repo.engine and repo.session_factory per-test using unique
  in-memory SQLite URIs — fixes KeyError: 'access_token' caused by
  stale session_factory pointing at production DB
- Add @pytest.mark.fuzz to all Hypothesis and Schemathesis tests;
  default run excludes them (addopts = -m 'not fuzz')
- Add missing fuzz tests to bounty, fleet, histogram, and repository
- Use tmp_path for state file in patch_state_file/mock_state_file to
  eliminate file-path race conditions under xdist parallelism
- Set default addopts: -v -q -x -n logical (26 tests in ~7s)
2026-04-09 18:32:46 -04:00
6fc1a2a3ea test: refactor suite to use AsyncClient, in-memory DBs, and parallel coverage 2026-04-09 16:43:49 -04:00
de84cc664f refactor: migrate database to SQLModel and implement modular DB structure 2026-04-09 16:43:30 -04:00
1541b4b7e0 docs: close DEBT-002 as by-design 2026-04-09 13:25:40 -04:00
2b7d872ab7 fix: revert DECNET_ADMIN_PASSWORD to default 'admin'; first-login change enforces security 2026-04-09 13:25:29 -04:00
4ae6f4f23d test: expand coverage 64%→76%; add BUGS.md for Gemini migration issues 2026-04-09 12:55:52 -04:00
310c2a1fbe feat: add pytest-asyncio, freezegun, schemathesis, pytest-cov to test toolchain 2026-04-09 12:40:59 -04:00
44de453bb2 refactor: modularize API tests to match router structure 2026-04-09 12:32:31 -04:00
ec66e01f55 fix: add missing __init__.py to tests/api subpackages to fix relative imports 2026-04-09 12:24:09 -04:00
a22f996027 docs: mark DEBT-001–004 as resolved in DEBT.md 2026-04-09 12:14:16 -04:00
b6b046c90b fix: harden startup security — require strong secrets, restrict CORS
- decnet/env.py: DECNET_JWT_SECRET and DECNET_ADMIN_PASSWORD are now
  required env vars; startup raises ValueError if unset or set to a
  known-bad default ("admin", "password", etc.)
- decnet/env.py: add DECNET_CORS_ORIGINS (comma-separated, defaults to
  http://localhost:8080) replacing the previous allow_origins=["*"]
- decnet/web/api.py: use DECNET_CORS_ORIGINS and tighten allow_methods
  and allow_headers to explicit lists
- tests/conftest.py: set required env vars at module level so test
  collection works without real credentials
- tests/test_web_api.py, test_web_api_fuzz.py: use DECNET_ADMIN_PASSWORD
  from env instead of hardcoded "admin"

Closes DEBT-001, DEBT-002, DEBT-004
2026-04-09 12:13:22 -04:00
29a2cf2738 refactor: modularize API routes into separate files and clean up dependencies 2026-04-09 11:58:57 -04:00
551664bc43 fix: stabilize test suite by ensuring proper test DB isolation and initialization 2026-04-09 02:31:14 -04:00
a2d07bd67c fix: refactor Bounty UI to match dashboard style and fix layout 2026-04-09 02:00:49 -04:00
a3b92d4dd6 docs: tag API endpoints for better organization 2026-04-09 01:58:54 -04:00
30edf9a55d feat: add DECNET_DEVELOPER toggle for API documentation 2026-04-09 01:55:31 -04:00
69626d705d feat: implement Bounty Vault for captured credentials and artifacts 2026-04-09 01:52:50 -04:00
0f86f883fe fix: resolve remaining bandit warnings and stabilize lifespan 2026-04-09 01:35:08 -04:00
13f3d15a36 fix: stabilize tests with synchronous DB init and handle Bandit security findings 2026-04-09 01:33:15 -04:00
8c7ec2953e fix: handle bcrypt 72-byte limit and increase JWT secret length 2026-04-09 01:11:32 -04:00
0123e1c69e fix: suppress noisy cleanup warnings in pytest and fix fleet test auth 2026-04-09 01:05:34 -04:00
9dc6ff3887 ui: ensure inputs and buttons inherit Ubuntu Mono font 2026-04-08 21:31:44 -04:00
fe25798425 ui: change main dashboard font to Ubuntu Mono 2026-04-08 21:30:30 -04:00
6c2478ede3 fix: restore missing API endpoints, fix chart rendering, and update date filter formatting 2026-04-08 21:25:59 -04:00
532a4e2dc5 fix: resolve SSE CORS issues and fix date filter format mismatch 2026-04-08 21:15:26 -04:00
ec503b9ec6 feat: implement advanced live logs with KQL search, histogram, and live/historical modes 2026-04-08 21:01:05 -04:00
238 changed files with 14079 additions and 5656 deletions

View File

@@ -1,7 +1,27 @@
{ {
"permissions": { "permissions": {
"allow": [ "allow": [
"mcp__plugin_context-mode_context-mode__ctx_batch_execute" "mcp__plugin_context-mode_context-mode__ctx_batch_execute",
"mcp__plugin_context-mode_context-mode__ctx_search",
"Bash(grep:*)",
"Bash(python -m pytest --tb=short -q)",
"Bash(pip install:*)",
"Bash(pip show:*)",
"Bash(python:*)",
"Bash(DECNET_JWT_SECRET=\"test-secret-xyz-1234!\" DECNET_ADMIN_PASSWORD=\"test-pass-xyz-1234!\" python:*)",
"Bash(ls /home/anti/Tools/DECNET/*.db* /home/anti/Tools/DECNET/test_*.db*)",
"mcp__plugin_context-mode_context-mode__ctx_execute_file",
"Bash(nc)",
"Bash(nmap:*)",
"Bash(ping -c1 -W2 192.168.1.200)",
"Bash(xxd)",
"Bash(curl -s http://192.168.1.200:2375/version)",
"Bash(python3 -m json.tool)",
"Bash(curl -s http://192.168.1.200:9200/)",
"Bash(docker image:*)",
"Read(//home/anti/Tools/cowrie/src/cowrie/data/txtcmds/**)",
"Read(//home/anti/Tools/cowrie/src/cowrie/data/txtcmds/bin/**)",
"mcp__plugin_context-mode_context-mode__ctx_index"
] ]
} }
} }

View File

@@ -1,7 +1,7 @@
# API Options # API Options
DECNET_API_HOST=0.0.0.0 DECNET_API_HOST=0.0.0.0
DECNET_API_PORT=8000 DECNET_API_PORT=8000
DECNET_JWT_SECRET=supersecretkey12345 DECNET_JWT_SECRET=supersecretkey12345678901234567
DECNET_INGEST_LOG_FILE=/var/log/decnet/decnet.log DECNET_INGEST_LOG_FILE=/var/log/decnet/decnet.log
# Web Dashboard Options # Web Dashboard Options
@@ -9,3 +9,4 @@ DECNET_WEB_HOST=0.0.0.0
DECNET_WEB_PORT=8080 DECNET_WEB_PORT=8080
DECNET_ADMIN_USER=admin DECNET_ADMIN_USER=admin
DECNET_ADMIN_PASSWORD=admin DECNET_ADMIN_PASSWORD=admin
DECNET_DEVELOPER=False

1
.gitignore vendored
View File

@@ -1,4 +1,5 @@
.venv/ .venv/
.claude/
__pycache__/ __pycache__/
*.pyc *.pyc
*.pyo *.pyo

37
BUGS.md Normal file
View File

@@ -0,0 +1,37 @@
# BUGS
Active bugs detected during development. Do not fix until noted otherwise.
---
## BUG-001 — Split-brain model imports across router files (Gemini SQLModel migration)
**Detected:** 2026-04-09
**Status:** Open — do not fix, migration in progress
**Symptom:** `from decnet.web.api import app` fails with `ModuleNotFoundError: No module named 'decnet.web.models'`
**Root cause:** Gemini's SQLModel migration is partially complete. Models were moved to `decnet/web/db/models.py`, but three router files were not updated and still import from the old `decnet.web.models` path:
| File | Stale import |
|------|--------------|
| `decnet/web/router/auth/api_login.py:12` | `from decnet.web.models import LoginRequest, Token` |
| `decnet/web/router/auth/api_change_pass.py:7` | `from decnet.web.models import ChangePasswordRequest` |
| `decnet/web/router/stats/api_get_stats.py:6` | `from decnet.web.models import StatsResponse` |
**Fix:** Update those three files to import from `decnet.web.db.models` (consistent with the other router files already migrated).
**Impact:** All `tests/api/` tests fail to collect. Web server cannot start.
---
## BUG-002 — `decnet/web/db/sqlite/repository.py` depends on `sqlalchemy` directly
**Detected:** 2026-04-09
**Status:** Resolved (dependency installed via `pip install -e ".[dev]"`)
**Symptom:** `ModuleNotFoundError: No module named 'sqlalchemy'` before `sqlmodel` was installed.
**Root cause:** `sqlmodel>=0.0.16` was added to `pyproject.toml` but `pip install -e .` had not been re-run in the dev environment.
**Fix:** Run `pip install -e ".[dev]"`. Already applied.

175
DEBT.md Normal file
View File

@@ -0,0 +1,175 @@
# DECNET — Technical Debt Register
> Last updated: 2026-04-09 — All addressable debt cleared.
> Severity: 🔴 Critical · 🟠 High · 🟡 Medium · 🟢 Low
---
## 🔴 Critical
### ~~DEBT-001 — Hardcoded JWT fallback secret~~ ✅ RESOLVED
~~**File:** `decnet/env.py:15`~~
Fixed in commit `b6b046c`. `DECNET_JWT_SECRET` is now required; startup raises `ValueError` if unset or set to a known-bad value.
### ~~DEBT-002 — Default admin credentials in code~~ ✅ CLOSED (by design)
`DECNET_ADMIN_PASSWORD` defaults to `"admin"` intentionally — the web dashboard enforces a password change on first login (`must_change_password=1`). Startup enforcement removed as it broke tooling without adding meaningful security.
### ~~DEBT-003 — Hardcoded LDAP password placeholder~~ ✅ CLOSED (false positive)
`templates/ldap/server.py:73``"<sasl_or_unknown>"` is a log label for SASL auth attempts, not an operational credential. The LDAP template is a honeypot; it has no bind password of its own.
### ~~DEBT-004 — Wildcard CORS with no origin restriction~~ ✅ RESOLVED
~~**File:** `decnet/web/api.py:48-54`~~
Fixed in commit `b6b046c`. `allow_origins` now uses `DECNET_CORS_ORIGINS` (env var, defaults to `http://localhost:8080`). `allow_methods` and `allow_headers` tightened to explicit allowlists.
---
## 🟠 High
### ~~DEBT-005 — Auth module has zero test coverage~~ ✅ RESOLVED
~~**File:** `decnet/web/auth.py`~~
Comprehensive test suite added in `tests/api/` covering login, password change, token validation, and JWT edge cases.
### ~~DEBT-006 — Database layer has zero test coverage~~ ✅ RESOLVED
~~**File:** `decnet/web/sqlite_repository.py`~~
`tests/api/test_repository.py` added — covers log insertion, bounty CRUD, histogram queries, stats summary, and fuzz testing of all query paths. In-memory SQLite with `StaticPool` ensures full isolation.
### ~~DEBT-007 — Web API routes mostly untested~~ ✅ RESOLVED
~~**Files:** `decnet/web/router/` (all sub-modules)~~
Full coverage added across `tests/api/` — fleet, logs, bounty, stream, auth all have dedicated test modules with both functional and fuzz test cases.
### ~~DEBT-008 — Auth token accepted via query string~~ ✅ RESOLVED
~~**File:** `decnet/web/dependencies.py:33-34`~~
Query-string token fallback removed. `get_current_user` now accepts only `Authorization: Bearer <token>` header. Tokens no longer appear in access logs or browser history.
### ~~DEBT-009 — Inconsistent and unstructured logging across templates~~ ✅ CLOSED (false positive)
All service templates already import from `decnet_logging` and use `syslog_line()` for structured output. The `print(line, flush=True)` present in some templates is the intentional Docker stdout channel for container log forwarding — not unstructured debug output.
### ~~DEBT-010 — `decnet_logging.py` duplicated across all 19 service templates~~ ✅ RESOLVED
~~**Files:** `templates/*/decnet_logging.py`~~
All 22 per-directory copies deleted. Canonical source lives at `templates/decnet_logging.py`. `deployer.py` now calls `_sync_logging_helper()` before `docker compose up` — it copies the canonical file into each active template build context automatically.
---
## 🟡 Medium
### DEBT-011 — No database migration system
**File:** `decnet/web/db/sqlite/repository.py`
Schema is created during startup via `SQLModel.metadata.create_all`. There is no Alembic or equivalent migration layer. Schema changes across deployments require manual intervention or silently break existing databases.
**Status:** Architectural. Deferred — requires Alembic integration and migration history bootstrapping.
### ~~DEBT-012 — No environment variable validation schema~~ ✅ RESOLVED
~~**File:** `decnet/env.py`~~
`DECNET_API_PORT` and `DECNET_WEB_PORT` now validated via `_port()` — enforces integer type and 165535 range, raises `ValueError` with a clear message on bad input.
### ~~DEBT-013 — Unvalidated input on `decky_name` route parameter~~ ✅ RESOLVED
~~**File:** `decnet/web/router/fleet/api_mutate_decky.py:10`~~
`decky_name` now declared as `Path(..., pattern=r"^[a-z0-9\-]{1,64}$")` — FastAPI rejects non-matching values with 422 before any downstream processing.
### ~~DEBT-014 — Streaming endpoint has no error handling~~ ✅ RESOLVED
~~**File:** `decnet/web/router/stream/api_stream_events.py`~~
`event_generator()` now wrapped in `try/except`. `asyncio.CancelledError` is handled silently (clean disconnect). All other exceptions log server-side via `log.exception()` and yield an `event: error` SSE frame to the client.
### ~~DEBT-015 — Broad exception detail leaked to API clients~~ ✅ RESOLVED
~~**File:** `decnet/web/router/fleet/api_deploy_deckies.py:78`~~
Raw exception message no longer returned to client. Full exception now logged server-side via `log.exception()`. Client receives generic `"Deployment failed. Check server logs for details."`.
### ~~DEBT-016 — Unvalidated log query parameters~~ ✅ RESOLVED
~~**File:** `decnet/web/router/logs/api_get_logs.py:12-19`~~
`search` capped at `max_length=512`. `start_time` and `end_time` validated against `^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}$` regex pattern. FastAPI rejects invalid input with 422.
### ~~DEBT-017 — Silent DB lock retry during startup~~ ✅ RESOLVED
~~**File:** `decnet/web/api.py:20-26`~~
Each retry attempt now emits `log.warning("DB init attempt %d/5 failed: %s", attempt, exc)`. After all retries exhausted, `log.error()` is emitted so degraded startup is always visible in logs.
### ~~DEBT-018 — No Docker HEALTHCHECK in any template~~ ✅ RESOLVED
~~**Files:** All 20 `templates/*/Dockerfile`~~
All 24 Dockerfiles updated with:
```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD kill -0 1 || exit 1
```
### ~~DEBT-019 — Most template containers run as root~~ ✅ RESOLVED
~~**Files:** All `templates/*/Dockerfile` except Cowrie~~
All 24 Dockerfiles now create a `decnet` system user, use `setcap cap_net_bind_service+eip` on the Python binary (allows binding ports < 1024 without root), and drop to `USER decnet` before `ENTRYPOINT`.
### ~~DEBT-020 — Swagger/OpenAPI disabled in production~~ ✅ RESOLVED
~~**File:** `decnet/web/api.py:43-45`~~
All route decorators now declare `responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}}`. OpenAPI schema is complete for all endpoints.
### ~~DEBT-021 — `sqlite_repository.py` is a god module~~ ✅ RESOLVED
~~**File:** `decnet/web/sqlite_repository.py` (~400 lines)~~
Fully refactored to `decnet/web/db/` modular layout: `models.py` (SQLModel schema), `repository.py` (abstract base), `sqlite/repository.py` (SQLite implementation), `sqlite/database.py` (engine/session factory). Commit `de84cc6`.
### DEBT-026 — IMAP/POP3 bait emails not configurable via service config
**Files:** `templates/imap/server.py`, `templates/pop3/server.py`, `decnet/services/imap.py`, `decnet/services/pop3.py`
Bait emails are hardcoded. A stub env var `IMAP_EMAIL_SEED` is read but currently ignored. Full implementation requires:
1. `IMAP_EMAIL_SEED` points to a JSON file with a list of `{from_, to, subject, date, body}` dicts.
2. `templates/imap/server.py` loads and merges/replaces `_BAIT_EMAILS` from that file at startup.
3. `decnet/services/imap.py` `compose_fragment()` reads `service_cfg["email_seed"]` and injects `IMAP_EMAIL_SEED` + a bind-mount for the seed file into the compose fragment.
4. Same pattern for POP3 (`POP3_EMAIL_SEED`).
**Status:** Stub in place — full wiring deferred to next session.
---
### DEBT-027 — Dynamic Bait Store
**Files:** `templates/redis/server.py`, `templates/ftp/server.py`
The bait store and honeypot files are hardcoded. A dynamic injection framework should be created to populate this payload across different honeypots.
**Status:** Deferred — out of current scope.
---
## 🟢 Low
### ~~DEBT-022 — Debug `print()` in correlation engine~~ ✅ CLOSED (false positive)
`decnet/correlation/engine.py:20` — The `print()` call is inside the module docstring as a usage example, not in executable code. No production code path affected.
### DEBT-023 — Unpinned base Docker images
**Files:** All `templates/*/Dockerfile`
`debian:bookworm-slim` and similar tags are used without digest pinning. Image contents can silently change on `docker pull`, breaking reproducibility and supply-chain integrity.
**Status:** Deferred — requires `docker pull` access to resolve current digests for each base image.
### ~~DEBT-024 — Stale service version hardcoded in Redis template~~ ✅ RESOLVED
~~**File:** `templates/redis/server.py:15`~~
`REDIS_VERSION` updated from `"7.0.12"` to `"7.2.7"` (current stable).
### ~~DEBT-025 — No lock file for Python dependencies~~ ✅ RESOLVED
~~**Files:** Project root~~
`requirements.lock` generated via `pip freeze`. Reproducible installs now available via `pip install -r requirements.lock`.
---
## Summary
| ID | Severity | Area | Status |
|----|----------|------|--------|
| ~~DEBT-001~~ | ✅ | Security / Auth | resolved `b6b046c` |
| ~~DEBT-002~~ | ✅ | Security / Auth | closed (by design) |
| ~~DEBT-003~~ | ✅ | Security / Infra | closed (false positive) |
| ~~DEBT-004~~ | ✅ | Security / API | resolved `b6b046c` |
| ~~DEBT-005~~ | ✅ | Testing | resolved |
| ~~DEBT-006~~ | ✅ | Testing | resolved |
| ~~DEBT-007~~ | ✅ | Testing | resolved |
| ~~DEBT-008~~ | ✅ | Security / Auth | resolved |
| ~~DEBT-009~~ | ✅ | Observability | closed (false positive) |
| ~~DEBT-010~~ | ✅ | Code Duplication | resolved |
| DEBT-011 | 🟡 Medium | DB / Migrations | deferred (Alembic scope) |
| ~~DEBT-012~~ | ✅ | Config | resolved |
| ~~DEBT-013~~ | ✅ | Security / Input | resolved |
| ~~DEBT-014~~ | ✅ | Reliability | resolved |
| ~~DEBT-015~~ | ✅ | Security / API | resolved |
| ~~DEBT-016~~ | ✅ | Security / API | resolved |
| ~~DEBT-017~~ | ✅ | Reliability | resolved |
| ~~DEBT-018~~ | ✅ | Infra | resolved |
| ~~DEBT-019~~ | ✅ | Security / Infra | resolved |
| ~~DEBT-020~~ | ✅ | Docs | resolved |
| ~~DEBT-021~~ | ✅ | Architecture | resolved `de84cc6` |
| ~~DEBT-022~~ | ✅ | Code Quality | closed (false positive) |
| DEBT-023 | 🟢 Low | Infra | deferred (needs docker pull) |
| ~~DEBT-024~~ | ✅ | Infra | resolved |
| ~~DEBT-025~~ | ✅ | Build | resolved |
| DEBT-026 | 🟡 Medium | Features | deferred (out of scope) |
| DEBT-027 | 🟡 Medium | Features | deferred (out of scope) |
**Remaining open:** DEBT-011 (Alembic), DEBT-023 (image pinning), DEBT-026 (modular mailboxes), DEBT-027 (Dynamic bait store)
**Estimated remaining effort:** ~10 hours

View File

@@ -82,7 +82,7 @@ Host NIC (eth0)
- Runtime state is persisted in `decnet-state.json`. - Runtime state is persisted in `decnet-state.json`.
- Do not modify this file manually. - Do not modify this file manually.
- **General Development Guidelines**: - **General Development Guidelines**:
- **Never** commit broken code. - **Never** commit broken code, or before running `pytest`s or `bandit` at the project level.
- **No matter how small** the changes, they must be committed. - **No matter how small** the changes, they must be committed.
- **If new features are addedd** new tests must be added, too. - **If new features are addedd** new tests must be added, too.
- **Never present broken code to the user**. Test, validate, then present. - **Never present broken code to the user**. Test, validate, then present.

View File

@@ -474,7 +474,7 @@ Key/value pairs are passed directly to the service plugin as persona config. Com
| `mongodb` | `mongo_version` | | `mongodb` | `mongo_version` |
| `elasticsearch` | `es_version`, `cluster_name` | | `elasticsearch` | `es_version`, `cluster_name` |
| `ldap` | `base_dn`, `domain` | | `ldap` | `base_dn`, `domain` |
| `snmp` | `snmp_community`, `sys_descr` | | `snmp` | `snmp_community`, `sys_descr`, `snmp_archetype` (picks predefined sysDescr for `water_plant`, `hospital`, etc.) |
| `mqtt` | `mqtt_version` | | `mqtt` | `mqtt_version` |
| `sip` | `sip_server`, `sip_domain` | | `sip` | `sip_server`, `sip_domain` |
| `k8s` | `k8s_version` | | `k8s` | `k8s_version` |

40
arche-test.ini Normal file
View File

@@ -0,0 +1,40 @@
# arche-test.ini
# OS fingerprint smoke-test fleet.
#
# One group per OS family, each spinning up 2 deckies.
# Deploy with:
# sudo .venv/bin/decnet deploy --config arche-test.ini --dry-run
# sudo .venv/bin/decnet deploy --config arche-test.ini --interface eth0
#
# After deploy, verify with:
# sudo nmap -O --osscan-guess <ip>
# sudo p0f -i <iface> -p
# ---- Linux (TTL 64, timestamps on, ECN offer) ----
[os-linux]
nmap_os=linux
services=ssh,http
amount=2
# ---- Windows (TTL 128, timestamps off, no ECN) ----
[os-windows]
nmap_os=windows
services=smb,rdp
amount=2
# ---- BSD (TTL 64, timestamps on, no ECN) ----
[os-bsd]
nmap_os=bsd
services=ssh,http
amount=2
# ---- Embedded (TTL 255, timestamps off, no SACK, no window scaling) ----
[os-embedded]
nmap_os=embedded
services=snmp
amount=2
# ---- Cisco (TTL 255, timestamps off, no SACK, ip_no_pmtu_disc on) ----
[os-cisco]
nmap_os=cisco
services=snmp
amount=2

View File

@@ -148,7 +148,7 @@ ARCHETYPES: dict[str, Archetype] = {
slug="deaddeck", slug="deaddeck",
display_name="Deaddeck (Entry Point)", display_name="Deaddeck (Entry Point)",
description="Internet-facing entry point with real interactive SSH — no honeypot emulation", description="Internet-facing entry point with real interactive SSH — no honeypot emulation",
services=["real_ssh"], services=["ssh"],
preferred_distros=["debian", "ubuntu22"], preferred_distros=["debian", "ubuntu22"],
nmap_os="linux", nmap_os="linux",
), ),
@@ -167,4 +167,4 @@ def all_archetypes() -> dict[str, Archetype]:
def random_archetype() -> Archetype: def random_archetype() -> Archetype:
return random.choice(list(ARCHETYPES.values())) return random.choice(list(ARCHETYPES.values())) # nosec B311

View File

@@ -8,7 +8,7 @@ Usage:
decnet services decnet services
""" """
import random import signal
from typing import Optional from typing import Optional
import typer import typer
@@ -28,7 +28,8 @@ from decnet.config import (
DecnetConfig, DecnetConfig,
random_hostname, random_hostname,
) )
from decnet.distros import all_distros, get_distro, random_distro from decnet.distros import all_distros, get_distro
from decnet.fleet import all_service_names, build_deckies, build_deckies_from_ini
from decnet.ini_loader import IniConfig, load_ini from decnet.ini_loader import IniConfig, load_ini
from decnet.network import detect_interface, detect_subnet, allocate_ips, get_host_ip from decnet.network import detect_interface, detect_subnet, allocate_ips, get_host_ip
from decnet.services.registry import all_services from decnet.services.registry import all_services
@@ -40,171 +41,31 @@ app = typer.Typer(
) )
console = Console() console = Console()
def _all_service_names() -> list[str]:
"""Return all registered service names from the live plugin registry."""
return sorted(all_services().keys())
def _kill_api() -> None:
"""Find and kill any running DECNET API (uvicorn) or mutator processes."""
import psutil
import os
def _resolve_distros( _killed: bool = False
distros_explicit: list[str] | None, for _proc in psutil.process_iter(['pid', 'name', 'cmdline']):
randomize_distros: bool, try:
n: int, _cmd = _proc.info['cmdline']
archetype: Archetype | None = None, if not _cmd:
) -> list[str]: continue
"""Return a list of n distro slugs based on CLI flags or archetype preference.""" if "uvicorn" in _cmd and "decnet.web.api:app" in _cmd:
if distros_explicit: console.print(f"[yellow]Stopping DECNET API (PID {_proc.info['pid']})...[/]")
return [distros_explicit[i % len(distros_explicit)] for i in range(n)] os.kill(_proc.info['pid'], signal.SIGTERM)
if randomize_distros: _killed = True
return [random_distro().slug for _ in range(n)] elif "decnet.cli" in _cmd and "mutate" in _cmd and "--watch" in _cmd:
if archetype: console.print(f"[yellow]Stopping DECNET Mutator Watcher (PID {_proc.info['pid']})...[/]")
pool = archetype.preferred_distros os.kill(_proc.info['pid'], signal.SIGTERM)
return [pool[i % len(pool)] for i in range(n)] _killed = True
# Default: cycle through all distros to maximize heterogeneity except (psutil.NoSuchProcess, psutil.AccessDenied):
slugs = list(all_distros().keys()) continue
return [slugs[i % len(slugs)] for i in range(n)]
def _build_deckies(
n: int,
ips: list[str],
services_explicit: list[str] | None,
randomize_services: bool,
distros_explicit: list[str] | None = None,
randomize_distros: bool = False,
archetype: Archetype | None = None,
) -> list[DeckyConfig]:
deckies = []
used_combos: set[frozenset] = set()
distro_slugs = _resolve_distros(distros_explicit, randomize_distros, n, archetype)
for i, ip in enumerate(ips):
name = f"decky-{i + 1:02d}"
distro = get_distro(distro_slugs[i])
hostname = random_hostname(distro.slug)
if services_explicit:
svc_list = services_explicit
elif archetype:
svc_list = list(archetype.services)
elif randomize_services:
svc_pool = _all_service_names()
attempts = 0
while True:
count = random.randint(1, min(3, len(svc_pool)))
chosen = frozenset(random.sample(svc_pool, count))
attempts += 1
if chosen not in used_combos or attempts > 20:
break
svc_list = list(chosen)
used_combos.add(chosen)
else:
typer.echo("Error: provide --services, --archetype, or --randomize-services.", err=True)
raise typer.Exit(1)
deckies.append(
DeckyConfig(
name=name,
ip=ip,
services=svc_list,
distro=distro.slug,
base_image=distro.image,
build_base=distro.build_base,
hostname=hostname,
archetype=archetype.slug if archetype else None,
nmap_os=archetype.nmap_os if archetype else "linux",
)
)
return deckies
def _build_deckies_from_ini(
ini: IniConfig,
subnet_cidr: str,
gateway: str,
host_ip: str,
randomize: bool,
cli_mutate_interval: int | None = None,
) -> list[DeckyConfig]:
"""Build DeckyConfig list from an IniConfig, auto-allocating missing IPs."""
from ipaddress import IPv4Address, IPv4Network
import time
now = time.time()
explicit_ips: set[IPv4Address] = {
IPv4Address(s.ip) for s in ini.deckies if s.ip
}
net = IPv4Network(subnet_cidr, strict=False)
reserved = {
net.network_address,
net.broadcast_address,
IPv4Address(gateway),
IPv4Address(host_ip),
} | explicit_ips
auto_pool = (str(addr) for addr in net.hosts() if addr not in reserved)
deckies: list[DeckyConfig] = []
for spec in ini.deckies:
# Resolve archetype (if any) — explicit services/distro override it
arch: Archetype | None = None
if spec.archetype:
arch = get_archetype(spec.archetype)
# Distro: archetype preferred list → random → global cycle
distro_pool = arch.preferred_distros if arch else list(all_distros().keys())
distro = get_distro(distro_pool[len(deckies) % len(distro_pool)])
hostname = random_hostname(distro.slug)
ip = spec.ip or next(auto_pool, None)
if ip is None:
raise ValueError(f"Not enough free IPs in {subnet_cidr} while assigning IP for '{spec.name}'.")
if spec.services:
known = set(_all_service_names())
unknown = [s for s in spec.services if s not in known]
if unknown:
raise ValueError(
f"Unknown service(s) in [{spec.name}]: {unknown}. "
f"Available: {_all_service_names()}"
)
svc_list = spec.services
elif arch:
svc_list = list(arch.services)
elif randomize:
svc_pool = _all_service_names()
count = random.randint(1, min(3, len(svc_pool)))
svc_list = random.sample(svc_pool, count)
else:
raise ValueError(
f"Decky '[{spec.name}]' has no services= in config. "
"Add services=, archetype=, or use --randomize-services."
)
# nmap_os priority: explicit INI key > archetype default > "linux"
resolved_nmap_os = spec.nmap_os or (arch.nmap_os if arch else "linux")
# mutation interval priority: CLI > per-decky INI > global INI
decky_mutate_interval = cli_mutate_interval
if decky_mutate_interval is None:
decky_mutate_interval = spec.mutate_interval if spec.mutate_interval is not None else ini.mutate_interval
deckies.append(DeckyConfig(
name=spec.name,
ip=ip,
services=svc_list,
distro=distro.slug,
base_image=distro.image,
build_base=distro.build_base,
hostname=hostname,
archetype=arch.slug if arch else None,
service_config=spec.service_config,
nmap_os=resolved_nmap_os,
mutate_interval=decky_mutate_interval,
last_mutated=now,
))
return deckies
if _killed:
console.print("[green]Background processes stopped.[/]")
@app.command() @app.command()
@@ -214,7 +75,7 @@ def api(
log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", help="Path to the DECNET log file to monitor"), log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", help="Path to the DECNET log file to monitor"),
) -> None: ) -> None:
"""Run the DECNET API and Web Dashboard in standalone mode.""" """Run the DECNET API and Web Dashboard in standalone mode."""
import subprocess import subprocess # nosec B404
import sys import sys
import os import os
@@ -222,7 +83,7 @@ def api(
_env: dict[str, str] = os.environ.copy() _env: dict[str, str] = os.environ.copy()
_env["DECNET_INGEST_LOG_FILE"] = str(log_file) _env["DECNET_INGEST_LOG_FILE"] = str(log_file)
try: try:
subprocess.run( subprocess.run( # nosec B603 B404
[sys.executable, "-m", "uvicorn", "decnet.web.api:app", "--host", host, "--port", str(port)], [sys.executable, "-m", "uvicorn", "decnet.web.api:app", "--host", host, "--port", str(port)],
env=_env env=_env
) )
@@ -243,12 +104,12 @@ def deploy(
randomize_services: bool = typer.Option(False, "--randomize-services", help="Assign random services to each decky"), randomize_services: bool = typer.Option(False, "--randomize-services", help="Assign random services to each decky"),
distro: Optional[str] = typer.Option(None, "--distro", help="Comma-separated distro slugs, e.g. debian,ubuntu22,rocky9"), distro: Optional[str] = typer.Option(None, "--distro", help="Comma-separated distro slugs, e.g. debian,ubuntu22,rocky9"),
randomize_distros: bool = typer.Option(False, "--randomize-distros", help="Assign a random distro to each decky"), randomize_distros: bool = typer.Option(False, "--randomize-distros", help="Assign a random distro to each decky"),
log_target: Optional[str] = typer.Option(None, "--log-target", help="Forward logs to ip:port (e.g. 192.168.1.5:5140)"), log_file: Optional[str] = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", help="Host path for the collector to write RFC 5424 logs (e.g. /var/log/decnet/decnet.log)"),
log_file: Optional[str] = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", help="Write RFC 5424 syslog to this path inside containers (e.g. /var/log/decnet/decnet.log)"),
archetype_name: Optional[str] = typer.Option(None, "--archetype", "-a", help="Machine archetype slug (e.g. linux-server, windows-workstation)"), archetype_name: Optional[str] = typer.Option(None, "--archetype", "-a", help="Machine archetype slug (e.g. linux-server, windows-workstation)"),
mutate_interval: Optional[int] = typer.Option(30, "--mutate-interval", help="Automatically rotate services every N minutes"), mutate_interval: Optional[int] = typer.Option(30, "--mutate-interval", help="Automatically rotate services every N minutes"),
dry_run: bool = typer.Option(False, "--dry-run", help="Generate compose file without starting containers"), dry_run: bool = typer.Option(False, "--dry-run", help="Generate compose file without starting containers"),
no_cache: bool = typer.Option(False, "--no-cache", help="Force rebuild all images, ignoring Docker layer cache"), no_cache: bool = typer.Option(False, "--no-cache", help="Force rebuild all images, ignoring Docker layer cache"),
parallel: bool = typer.Option(False, "--parallel", help="Build all images concurrently (enables BuildKit, separates build from up)"),
ipvlan: bool = typer.Option(False, "--ipvlan", help="Use IPvlan L2 instead of MACVLAN (required on WiFi interfaces)"), ipvlan: bool = typer.Option(False, "--ipvlan", help="Use IPvlan L2 instead of MACVLAN (required on WiFi interfaces)"),
config_file: Optional[str] = typer.Option(None, "--config", "-c", help="Path to INI config file"), config_file: Optional[str] = typer.Option(None, "--config", "-c", help="Path to INI config file"),
api: bool = typer.Option(False, "--api", help="Start the FastAPI backend to ingest and serve logs"), api: bool = typer.Option(False, "--api", help="Start the FastAPI backend to ingest and serve logs"),
@@ -270,7 +131,6 @@ def deploy(
console.print(f"[red]{e}[/]") console.print(f"[red]{e}[/]")
raise typer.Exit(1) raise typer.Exit(1)
# CLI flags override INI values when explicitly provided
iface = interface or ini.interface or detect_interface() iface = interface or ini.interface or detect_interface()
subnet_cidr = subnet or ini.subnet subnet_cidr = subnet or ini.subnet
effective_gateway = ini.gateway effective_gateway = ini.gateway
@@ -284,7 +144,6 @@ def deploy(
f"[dim]Subnet:[/] {subnet_cidr} [dim]Gateway:[/] {effective_gateway} " f"[dim]Subnet:[/] {subnet_cidr} [dim]Gateway:[/] {effective_gateway} "
f"[dim]Host IP:[/] {host_ip}") f"[dim]Host IP:[/] {host_ip}")
# Register bring-your-own services from INI before validation
if ini.custom_services: if ini.custom_services:
from decnet.custom_service import CustomService from decnet.custom_service import CustomService
from decnet.services.registry import register_custom_service from decnet.services.registry import register_custom_service
@@ -298,10 +157,9 @@ def deploy(
) )
) )
effective_log_target = log_target or ini.log_target
effective_log_file = log_file effective_log_file = log_file
try: try:
decky_configs = _build_deckies_from_ini( decky_configs = build_deckies_from_ini(
ini, subnet_cidr, effective_gateway, host_ip, randomize_services, cli_mutate_interval=mutate_interval ini, subnet_cidr, effective_gateway, host_ip, randomize_services, cli_mutate_interval=mutate_interval
) )
except ValueError as e: except ValueError as e:
@@ -317,13 +175,12 @@ def deploy(
services_list = [s.strip() for s in services.split(",")] if services else None services_list = [s.strip() for s in services.split(",")] if services else None
if services_list: if services_list:
known = set(_all_service_names()) known = set(all_service_names())
unknown = [s for s in services_list if s not in known] unknown = [s for s in services_list if s not in known]
if unknown: if unknown:
console.print(f"[red]Unknown service(s): {unknown}. Available: {_all_service_names()}[/]") console.print(f"[red]Unknown service(s): {unknown}. Available: {all_service_names()}[/]")
raise typer.Exit(1) raise typer.Exit(1)
# Resolve archetype if provided
arch: Archetype | None = None arch: Archetype | None = None
if archetype_name: if archetype_name:
try: try:
@@ -357,15 +214,13 @@ def deploy(
raise typer.Exit(1) raise typer.Exit(1)
ips = allocate_ips(subnet_cidr, effective_gateway, host_ip, deckies, ip_start) ips = allocate_ips(subnet_cidr, effective_gateway, host_ip, deckies, ip_start)
decky_configs = _build_deckies( decky_configs = build_deckies(
deckies, ips, services_list, randomize_services, deckies, ips, services_list, randomize_services,
distros_explicit=distros_list, randomize_distros=randomize_distros, distros_explicit=distros_list, randomize_distros=randomize_distros,
archetype=arch, mutate_interval=mutate_interval, archetype=arch, mutate_interval=mutate_interval,
) )
effective_log_target = log_target
effective_log_file = log_file effective_log_file = log_file
# Handle automatic log file for API
if api and not effective_log_file: if api and not effective_log_file:
effective_log_file = os.path.join(os.getcwd(), "decnet.log") effective_log_file = os.path.join(os.getcwd(), "decnet.log")
console.print(f"[cyan]API mode enabled: defaulting log-file to {effective_log_file}[/]") console.print(f"[cyan]API mode enabled: defaulting log-file to {effective_log_file}[/]")
@@ -376,52 +231,71 @@ def deploy(
subnet=subnet_cidr, subnet=subnet_cidr,
gateway=effective_gateway, gateway=effective_gateway,
deckies=decky_configs, deckies=decky_configs,
log_target=effective_log_target,
log_file=effective_log_file, log_file=effective_log_file,
ipvlan=ipvlan, ipvlan=ipvlan,
mutate_interval=mutate_interval, mutate_interval=mutate_interval,
) )
if effective_log_target and not dry_run: from decnet.engine import deploy as _deploy
from decnet.logging.forwarder import probe_log_target _deploy(config, dry_run=dry_run, no_cache=no_cache, parallel=parallel)
if not probe_log_target(effective_log_target):
console.print(f"[yellow]Warning: log target {effective_log_target} is unreachable. "
"Logs will be lost if it stays down.[/]")
from decnet.deployer import deploy as _deploy
_deploy(config, dry_run=dry_run, no_cache=no_cache)
if mutate_interval is not None and not dry_run: if mutate_interval is not None and not dry_run:
import subprocess import subprocess # nosec B404
import sys import sys
console.print(f"[green]Starting DECNET Mutator watcher in the background (interval: {mutate_interval}m)...[/]") console.print(f"[green]Starting DECNET Mutator watcher in the background (interval: {mutate_interval}m)...[/]")
try: try:
subprocess.Popen( subprocess.Popen( # nosec B603
[sys.executable, "-m", "decnet.cli", "mutate", "--watch"], [sys.executable, "-m", "decnet.cli", "mutate", "--watch"],
stdout=subprocess.DEVNULL, stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT stderr=subprocess.STDOUT,
start_new_session=True,
) )
except (FileNotFoundError, subprocess.SubprocessError): except (FileNotFoundError, subprocess.SubprocessError):
console.print("[red]Failed to start mutator watcher.[/]") console.print("[red]Failed to start mutator watcher.[/]")
if effective_log_file and not dry_run and not api:
import subprocess # noqa: F811 # nosec B404
import sys
from pathlib import Path as _Path
_collector_err = _Path(effective_log_file).with_suffix(".collector.log")
console.print(f"[bold cyan]Starting log collector[/] → {effective_log_file}")
subprocess.Popen( # nosec B603
[sys.executable, "-m", "decnet.cli", "collect", "--log-file", str(effective_log_file)],
stdin=subprocess.DEVNULL,
stdout=open(_collector_err, "a"), # nosec B603
stderr=subprocess.STDOUT,
start_new_session=True,
)
if api and not dry_run: if api and not dry_run:
import subprocess import subprocess # nosec B404
import sys import sys
console.print(f"[green]Starting DECNET API on port {api_port}...[/]") console.print(f"[green]Starting DECNET API on port {api_port}...[/]")
_env: dict[str, str] = os.environ.copy() _env: dict[str, str] = os.environ.copy()
_env["DECNET_INGEST_LOG_FILE"] = str(effective_log_file) _env["DECNET_INGEST_LOG_FILE"] = str(effective_log_file or "")
try: try:
subprocess.Popen( subprocess.Popen( # nosec B603
[sys.executable, "-m", "uvicorn", "decnet.web.api:app", "--host", "0.0.0.0", "--port", str(api_port)], [sys.executable, "-m", "uvicorn", "decnet.web.api:app", "--host", DECNET_API_HOST, "--port", str(api_port)],
env=_env, env=_env,
stdout=subprocess.DEVNULL, stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT stderr=subprocess.STDOUT
) )
console.print(f"[dim]API running at http://0.0.0.0:{api_port}[/]") console.print(f"[dim]API running at http://{DECNET_API_HOST}:{api_port}[/]")
except (FileNotFoundError, subprocess.SubprocessError): except (FileNotFoundError, subprocess.SubprocessError):
console.print("[red]Failed to start API. Ensure 'uvicorn' is installed in the current environment.[/]") console.print("[red]Failed to start API. Ensure 'uvicorn' is installed in the current environment.[/]")
@app.command()
def collect(
log_file: str = typer.Option(DECNET_INGEST_LOG_FILE, "--log-file", "-f", help="Path to write RFC 5424 syslog lines and .json records"),
) -> None:
"""Stream Docker logs from all running decky service containers to a log file."""
import asyncio
from decnet.collector import log_collector_worker
console.print(f"[bold cyan]Collector starting[/] → {log_file}")
asyncio.run(log_collector_worker(log_file))
@app.command() @app.command()
def mutate( def mutate(
watch: bool = typer.Option(False, "--watch", "-w", help="Run continuously and mutate deckies according to their interval"), watch: bool = typer.Option(False, "--watch", "-w", help="Run continuously and mutate deckies according to their interval"),
@@ -446,7 +320,7 @@ def mutate(
@app.command() @app.command()
def status() -> None: def status() -> None:
"""Show running deckies and their status.""" """Show running deckies and their status."""
from decnet.deployer import status as _status from decnet.engine import status as _status
_status() _status()
@@ -460,9 +334,12 @@ def teardown(
console.print("[red]Specify --all or --id <name>.[/]") console.print("[red]Specify --all or --id <name>.[/]")
raise typer.Exit(1) raise typer.Exit(1)
from decnet.deployer import teardown as _teardown from decnet.engine import teardown as _teardown
_teardown(decky_id=id_) _teardown(decky_id=id_)
if all_:
_kill_api()
@app.command(name="services") @app.command(name="services")
def list_services() -> None: def list_services() -> None:
@@ -572,7 +449,6 @@ def serve_web(
import socketserver import socketserver
from pathlib import Path from pathlib import Path
# Assuming decnet_web/dist is relative to the project root
dist_dir = Path(__file__).parent.parent / "decnet_web" / "dist" dist_dir = Path(__file__).parent.parent / "decnet_web" / "dist"
if not dist_dir.exists(): if not dist_dir.exists():
@@ -581,10 +457,8 @@ def serve_web(
class SPAHTTPRequestHandler(http.server.SimpleHTTPRequestHandler): class SPAHTTPRequestHandler(http.server.SimpleHTTPRequestHandler):
def do_GET(self): def do_GET(self):
# Try to serve the requested file
path = self.translate_path(self.path) path = self.translate_path(self.path)
if not Path(path).exists() or Path(path).is_dir(): if not Path(path).exists() or Path(path).is_dir():
# If not found or is a directory, serve index.html (for React Router)
self.path = "/index.html" self.path = "/index.html"
return super().do_GET() return super().do_GET()
@@ -597,3 +471,6 @@ def serve_web(
httpd.serve_forever() httpd.serve_forever()
except KeyboardInterrupt: except KeyboardInterrupt:
console.print("\n[dim]Shutting down dashboard server.[/]") console.print("\n[dim]Shutting down dashboard server.[/]")
if __name__ == '__main__': # pragma: no cover
app()

View File

@@ -0,0 +1,13 @@
from decnet.collector.worker import (
is_service_container,
is_service_event,
log_collector_worker,
parse_rfc5424,
)
__all__ = [
"is_service_container",
"is_service_event",
"log_collector_worker",
"parse_rfc5424",
]

200
decnet/collector/worker.py Normal file
View File

@@ -0,0 +1,200 @@
"""
Host-side Docker log collector.
Streams stdout from all running decky service containers via the Docker SDK,
writes RFC 5424 lines to <log_file> and parsed JSON records to <log_file>.json.
The ingester tails the .json file; rsyslog can consume the .log file independently.
"""
import asyncio
import json
import logging
import re
from datetime import datetime
from pathlib import Path
from typing import Any, Optional
logger = logging.getLogger("decnet.collector")
# ─── RFC 5424 parser ──────────────────────────────────────────────────────────
_RFC5424_RE = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME (decky name)
r"(\S+) " # 3: APP-NAME (service)
r"- " # PROCID always NILVALUE
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD element + optional MSG
)
_SD_BLOCK_RE = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS = ("src_ip", "src", "client_ip", "remote_ip", "ip")
def parse_rfc5424(line: str) -> Optional[dict[str, Any]]:
"""
Parse an RFC 5424 DECNET log line into a structured dict.
Returns None if the line does not match the expected format.
"""
m = _RFC5424_RE.match(line)
if not m:
return None
ts_raw, decky, service, event_type, sd_rest = m.groups()
fields: dict[str, str] = {}
msg: str = ""
if sd_rest.startswith("-"):
msg = sd_rest[1:].lstrip()
elif sd_rest.startswith("["):
block = _SD_BLOCK_RE.search(sd_rest)
if block:
for k, v in _PARAM_RE.findall(block.group(1)):
fields[k] = v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
msg_match = re.search(r'\]\s+(.+)$', sd_rest)
if msg_match:
msg = msg_match.group(1).strip()
else:
msg = sd_rest
attacker_ip = "Unknown"
for fname in _IP_FIELDS:
if fname in fields:
attacker_ip = fields[fname]
break
try:
ts_formatted = datetime.fromisoformat(ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
ts_formatted = ts_raw
return {
"timestamp": ts_formatted,
"decky": decky,
"service": service,
"event_type": event_type,
"attacker_ip": attacker_ip,
"fields": fields,
"msg": msg,
"raw_line": line,
}
# ─── Container helpers ────────────────────────────────────────────────────────
def _load_service_container_names() -> set[str]:
"""
Return the exact set of service container names from decnet-state.json.
Format: {decky_name}-{service_name}, e.g. 'omega-decky-smtp'.
Returns an empty set if no state file exists.
"""
from decnet.config import load_state
state = load_state()
if state is None:
return set()
config, _ = state
names: set[str] = set()
for decky in config.deckies:
for svc in decky.services:
names.add(f"{decky.name}-{svc.replace('_', '-')}")
return names
def is_service_container(container) -> bool:
"""Return True if this Docker container is a known DECNET service container."""
name = (container if isinstance(container, str) else container.name).lstrip("/")
return name in _load_service_container_names()
def is_service_event(attrs: dict) -> bool:
"""Return True if a Docker start event is for a known DECNET service container."""
name = attrs.get("name", "").lstrip("/")
return name in _load_service_container_names()
# ─── Blocking stream worker (runs in a thread) ────────────────────────────────
def _stream_container(container_id: str, log_path: Path, json_path: Path) -> None:
"""Stream logs from one container and append to the host log files."""
import docker # type: ignore[import]
try:
client = docker.from_env()
container = client.containers.get(container_id)
log_stream = container.logs(stream=True, follow=True, stdout=True, stderr=False)
buf = ""
with (
open(log_path, "a", encoding="utf-8") as lf,
open(json_path, "a", encoding="utf-8") as jf,
):
for chunk in log_stream:
buf += chunk.decode("utf-8", errors="replace")
while "\n" in buf:
line, buf = buf.split("\n", 1)
line = line.rstrip()
if not line:
continue
lf.write(line + "\n")
lf.flush()
parsed = parse_rfc5424(line)
if parsed:
jf.write(json.dumps(parsed) + "\n")
jf.flush()
except Exception as exc:
logger.debug("Log stream ended for container %s: %s", container_id, exc)
# ─── Async collector ──────────────────────────────────────────────────────────
async def log_collector_worker(log_file: str) -> None:
"""
Background task: streams Docker logs from all running decky service
containers, writing RFC 5424 lines to log_file and parsed JSON records
to log_file.json for the ingester to consume.
Watches Docker events to pick up containers started after initial scan.
"""
import docker # type: ignore[import]
log_path = Path(log_file)
json_path = log_path.with_suffix(".json")
log_path.parent.mkdir(parents=True, exist_ok=True)
active: dict[str, asyncio.Task[None]] = {}
loop = asyncio.get_running_loop()
def _spawn(container_id: str, container_name: str) -> None:
if container_id not in active or active[container_id].done():
active[container_id] = asyncio.ensure_future(
asyncio.to_thread(_stream_container, container_id, log_path, json_path),
loop=loop,
)
logger.info("Collecting logs from container: %s", container_name)
try:
client = docker.from_env()
for container in client.containers.list():
if is_service_container(container):
_spawn(container.id, container.name.lstrip("/"))
def _watch_events() -> None:
for event in client.events(
decode=True,
filters={"type": "container", "event": "start"},
):
attrs = event.get("Actor", {}).get("Attributes", {})
cid = event.get("id", "")
name = attrs.get("name", "")
if cid and is_service_event(attrs):
loop.call_soon_threadsafe(_spawn, cid, name)
await asyncio.to_thread(_watch_events)
except asyncio.CancelledError:
for task in active.values():
task.cancel()
raise
except Exception as exc:
logger.error("Collector error: %s", exc)

View File

@@ -6,6 +6,12 @@ Network model:
All service containers for that decky share the base's network namespace All service containers for that decky share the base's network namespace
via `network_mode: "service:<base>"`. From the outside, every service on via `network_mode: "service:<base>"`. From the outside, every service on
a given decky appears to come from the same IP — exactly like a real host. a given decky appears to come from the same IP — exactly like a real host.
Logging model:
Service containers write RFC 5424 lines to stdout. Docker captures them
via the json-file driver. The host-side collector (decnet.web.collector)
streams those logs and writes them to the host log file for the ingester
and rsyslog to consume. No bind mounts or shared volumes are needed.
""" """
from pathlib import Path from pathlib import Path
@@ -17,35 +23,19 @@ from decnet.network import MACVLAN_NETWORK_NAME
from decnet.os_fingerprint import get_os_sysctls from decnet.os_fingerprint import get_os_sysctls
from decnet.services.registry import get_service from decnet.services.registry import get_service
_CONTAINER_LOG_DIR = "/var/log/decnet" _DOCKER_LOGGING = {
"driver": "json-file",
_LOG_NETWORK = "decnet_logs" "options": {
"max-size": "10m",
"max-file": "5",
def _resolve_log_file(log_file: str) -> tuple[str, str]: },
""" }
Return (host_dir, container_log_path) for a user-supplied log file path.
The host path is resolved to absolute so Docker can bind-mount it.
All containers share the same host directory, mounted at _CONTAINER_LOG_DIR.
"""
host_path = Path(log_file).resolve()
host_dir = str(host_path.parent)
container_path = f"{_CONTAINER_LOG_DIR}/{host_path.name}"
return host_dir, container_path
def generate_compose(config: DecnetConfig) -> dict: def generate_compose(config: DecnetConfig) -> dict:
"""Build and return the full docker-compose data structure.""" """Build and return the full docker-compose data structure."""
services: dict = {} services: dict = {}
log_host_dir: str | None = None
log_container_path: str | None = None
if config.log_file:
log_host_dir, log_container_path = _resolve_log_file(config.log_file)
# Ensure the host log directory exists so Docker doesn't create it as root-owned
Path(log_host_dir).mkdir(parents=True, exist_ok=True)
for decky in config.deckies: for decky in config.deckies:
base_key = decky.name # e.g. "decky-01" base_key = decky.name # e.g. "decky-01"
@@ -62,8 +52,6 @@ def generate_compose(config: DecnetConfig) -> dict:
} }
}, },
} }
if config.log_target:
base["networks"][_LOG_NETWORK] = {}
# Inject TCP/IP stack sysctls to spoof the claimed OS fingerprint. # Inject TCP/IP stack sysctls to spoof the claimed OS fingerprint.
# Only the base container needs this — service containers inherit the # Only the base container needs this — service containers inherit the
@@ -77,23 +65,18 @@ def generate_compose(config: DecnetConfig) -> dict:
for svc_name in decky.services: for svc_name in decky.services:
svc = get_service(svc_name) svc = get_service(svc_name)
svc_cfg = decky.service_config.get(svc_name, {}) svc_cfg = decky.service_config.get(svc_name, {})
fragment = svc.compose_fragment( fragment = svc.compose_fragment(decky.name, service_cfg=svc_cfg)
decky.name, log_target=config.log_target, service_cfg=svc_cfg
)
# Inject the per-decky base image into build services so containers # Inject the per-decky base image into build services so containers
# vary by distro and don't all fingerprint as debian:bookworm-slim. # vary by distro and don't all fingerprint as debian:bookworm-slim.
# Services that need a fixed upstream image (e.g. conpot) can pre-set
# build.args.BASE_IMAGE in their compose_fragment() to opt out.
if "build" in fragment: if "build" in fragment:
fragment["build"].setdefault("args", {})["BASE_IMAGE"] = decky.build_base args = fragment["build"].setdefault("args", {})
args.setdefault("BASE_IMAGE", decky.build_base)
fragment.setdefault("environment", {}) fragment.setdefault("environment", {})
fragment["environment"]["HOSTNAME"] = decky.hostname fragment["environment"]["HOSTNAME"] = decky.hostname
if log_host_dir and log_container_path:
fragment["environment"]["DECNET_LOG_FILE"] = log_container_path
fragment.setdefault("volumes", [])
mount = f"{log_host_dir}:{_CONTAINER_LOG_DIR}"
if mount not in fragment["volumes"]:
fragment["volumes"].append(mount)
# Share the base container's network — no own IP needed # Share the base container's network — no own IP needed
fragment["network_mode"] = f"service:{base_key}" fragment["network_mode"] = f"service:{base_key}"
@@ -103,6 +86,9 @@ def generate_compose(config: DecnetConfig) -> dict:
fragment.pop("hostname", None) fragment.pop("hostname", None)
fragment.pop("networks", None) fragment.pop("networks", None)
# Rotate Docker logs so disk usage is bounded
fragment["logging"] = _DOCKER_LOGGING
services[f"{decky.name}-{svc_name}"] = fragment services[f"{decky.name}-{svc_name}"] = fragment
# Network definitions # Network definitions
@@ -111,8 +97,6 @@ def generate_compose(config: DecnetConfig) -> dict:
"external": True, # created by network.py before compose up "external": True, # created by network.py before compose up
} }
} }
if config.log_target:
networks[_LOG_NETWORK] = {"driver": "bridge", "internal": True}
return { return {
"version": "3.8", "version": "3.8",

View File

@@ -7,7 +7,7 @@ import json
from pathlib import Path from pathlib import Path
from typing import Literal from typing import Literal
from pydantic import BaseModel, field_validator from pydantic import BaseModel, field_validator # field_validator used by DeckyConfig
from decnet.distros import random_hostname as _random_hostname from decnet.distros import random_hostname as _random_hostname
@@ -49,21 +49,10 @@ class DecnetConfig(BaseModel):
subnet: str subnet: str
gateway: str gateway: str
deckies: list[DeckyConfig] deckies: list[DeckyConfig]
log_target: str | None = None # "ip:port" or None log_file: str | None = None # host path where the collector writes the log file
log_file: str | None = None # path for RFC 5424 syslog file output
ipvlan: bool = False # use IPvlan L2 instead of MACVLAN (WiFi-friendly) ipvlan: bool = False # use IPvlan L2 instead of MACVLAN (WiFi-friendly)
mutate_interval: int | None = DEFAULT_MUTATE_INTERVAL # global automatic rotation interval in minutes mutate_interval: int | None = DEFAULT_MUTATE_INTERVAL # global automatic rotation interval in minutes
@field_validator("log_target")
@classmethod
def validate_log_target(cls, v: str | None) -> str | None:
if v is None:
return v
parts = v.rsplit(":", 1)
if len(parts) != 2 or not parts[1].isdigit():
raise ValueError("log_target must be in ip:port format, e.g. 192.168.1.5:5140")
return v
def save_state(config: DecnetConfig, compose_path: Path) -> None: def save_state(config: DecnetConfig, compose_path: Path) -> None:
payload = { payload = {

View File

@@ -97,8 +97,8 @@ def random_hostname(distro_slug: str = "debian") -> str:
"""Generate a plausible hostname for the given distro style.""" """Generate a plausible hostname for the given distro style."""
profile = DISTROS.get(distro_slug) profile = DISTROS.get(distro_slug)
style = profile.hostname_style if profile else "generic" style = profile.hostname_style if profile else "generic"
word = random.choice(_NAME_WORDS) word = random.choice(_NAME_WORDS) # nosec B311
num = random.randint(10, 99) num = random.randint(10, 99) # nosec B311
if style == "rhel": if style == "rhel":
# RHEL/CentOS/Fedora convention: word+num.localdomain # RHEL/CentOS/Fedora convention: word+num.localdomain
@@ -107,7 +107,7 @@ def random_hostname(distro_slug: str = "debian") -> str:
return f"{word}-{num}" return f"{word}-{num}"
elif style == "rolling": elif style == "rolling":
# Kali/Arch: just a word, no suffix # Kali/Arch: just a word, no suffix
return f"{word}-{random.choice(_NAME_WORDS)}" return f"{word}-{random.choice(_NAME_WORDS)}" # nosec B311
else: else:
# Debian/Ubuntu: SRV-WORD-nn # Debian/Ubuntu: SRV-WORD-nn
return f"SRV-{word.upper()}-{num}" return f"SRV-{word.upper()}-{num}"
@@ -122,7 +122,7 @@ def get_distro(slug: str) -> DistroProfile:
def random_distro() -> DistroProfile: def random_distro() -> DistroProfile:
return random.choice(list(DISTROS.values())) return random.choice(list(DISTROS.values())) # nosec B311
def all_distros() -> dict[str, DistroProfile]: def all_distros() -> dict[str, DistroProfile]:

15
decnet/engine/__init__.py Normal file
View File

@@ -0,0 +1,15 @@
from decnet.engine.deployer import (
COMPOSE_FILE,
_compose_with_retry,
deploy,
status,
teardown,
)
__all__ = [
"COMPOSE_FILE",
"_compose_with_retry",
"deploy",
"status",
"teardown",
]

View File

@@ -2,7 +2,8 @@
Deploy, teardown, and status via Docker SDK + subprocess docker compose. Deploy, teardown, and status via Docker SDK + subprocess docker compose.
""" """
import subprocess import shutil
import subprocess # nosec B404
import time import time
from pathlib import Path from pathlib import Path
@@ -27,11 +28,32 @@ from decnet.network import (
console = Console() console = Console()
COMPOSE_FILE = Path("decnet-compose.yml") COMPOSE_FILE = Path("decnet-compose.yml")
_CANONICAL_LOGGING = Path(__file__).parent.parent.parent / "templates" / "decnet_logging.py"
def _compose(*args: str, compose_file: Path = COMPOSE_FILE) -> None: def _sync_logging_helper(config: DecnetConfig) -> None:
"""Copy the canonical decnet_logging.py into every active template build context."""
from decnet.services.registry import get_service
seen: set[Path] = set()
for decky in config.deckies:
for svc_name in decky.services:
svc = get_service(svc_name)
if svc is None:
continue
ctx = svc.dockerfile_context()
if ctx is None or ctx in seen:
continue
seen.add(ctx)
dest = ctx / "decnet_logging.py"
if not dest.exists() or dest.read_bytes() != _CANONICAL_LOGGING.read_bytes():
shutil.copy2(_CANONICAL_LOGGING, dest)
def _compose(*args: str, compose_file: Path = COMPOSE_FILE, env: dict | None = None) -> None:
import os
cmd = ["docker", "compose", "-f", str(compose_file), *args] cmd = ["docker", "compose", "-f", str(compose_file), *args]
subprocess.run(cmd, check=True) merged = {**os.environ, **(env or {})}
subprocess.run(cmd, check=True, env=merged) # nosec B603
_PERMANENT_ERRORS = ( _PERMANENT_ERRORS = (
@@ -48,12 +70,15 @@ def _compose_with_retry(
compose_file: Path = COMPOSE_FILE, compose_file: Path = COMPOSE_FILE,
retries: int = 3, retries: int = 3,
delay: float = 5.0, delay: float = 5.0,
env: dict | None = None,
) -> None: ) -> None:
"""Run a docker compose command, retrying on transient failures.""" """Run a docker compose command, retrying on transient failures."""
import os
last_exc: subprocess.CalledProcessError | None = None last_exc: subprocess.CalledProcessError | None = None
cmd = ["docker", "compose", "-f", str(compose_file), *args] cmd = ["docker", "compose", "-f", str(compose_file), *args]
merged = {**os.environ, **(env or {})}
for attempt in range(1, retries + 1): for attempt in range(1, retries + 1):
result = subprocess.run(cmd, capture_output=True, text=True) result = subprocess.run(cmd, capture_output=True, text=True, env=merged) # nosec B603
if result.returncode == 0: if result.returncode == 0:
if result.stdout: if result.stdout:
print(result.stdout, end="") print(result.stdout, end="")
@@ -80,10 +105,9 @@ def _compose_with_retry(
raise last_exc raise last_exc
def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False) -> None: def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False, parallel: bool = False) -> None:
client = docker.from_env() client = docker.from_env()
# --- Network setup ---
ip_list = [d.ip for d in config.deckies] ip_list = [d.ip for d in config.deckies]
decky_range = ips_to_range(ip_list) decky_range = ips_to_range(ip_list)
host_ip = get_host_ip(config.interface) host_ip = get_host_ip(config.interface)
@@ -110,7 +134,8 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False)
) )
setup_host_macvlan(config.interface, host_ip, decky_range) setup_host_macvlan(config.interface, host_ip, decky_range)
# --- Compose generation --- _sync_logging_helper(config)
compose_path = write_compose(config, COMPOSE_FILE) compose_path = write_compose(config, COMPOSE_FILE)
console.print(f"[bold cyan]Compose file written[/] → {compose_path}") console.print(f"[bold cyan]Compose file written[/] → {compose_path}")
@@ -118,46 +143,27 @@ def deploy(config: DecnetConfig, dry_run: bool = False, no_cache: bool = False)
console.print("[yellow]Dry run — no containers started.[/]") console.print("[yellow]Dry run — no containers started.[/]")
return return
# --- Save state before bring-up ---
save_state(config, compose_path) save_state(config, compose_path)
# --- Bring up --- build_env = {"DOCKER_BUILDKIT": "1"} if parallel else {}
console.print("[bold cyan]Building images and starting deckies...[/]") console.print("[bold cyan]Building images and starting deckies...[/]")
build_args = ["build"]
if no_cache: if no_cache:
_compose_with_retry("build", "--no-cache", compose_file=compose_path) build_args.append("--no-cache")
_compose_with_retry("up", "--build", "-d", compose_file=compose_path)
if parallel:
console.print("[bold cyan]Parallel build enabled — building all images concurrently...[/]")
_compose_with_retry(*build_args, compose_file=compose_path, env=build_env)
_compose_with_retry("up", "-d", compose_file=compose_path, env=build_env)
else:
if no_cache:
_compose_with_retry("build", "--no-cache", compose_file=compose_path)
_compose_with_retry("up", "--build", "-d", compose_file=compose_path)
# --- Status summary ---
_print_status(config) _print_status(config)
def _kill_api() -> None:
"""Find and kill any running DECNET API (uvicorn) or mutator processes."""
import psutil
import signal
import os
_killed: bool = False
for _proc in psutil.process_iter(['pid', 'name', 'cmdline']):
try:
_cmd = _proc.info['cmdline']
if not _cmd:
continue
if "uvicorn" in _cmd and "decnet.web.api:app" in _cmd:
console.print(f"[yellow]Stopping DECNET API (PID {_proc.info['pid']})...[/]")
os.kill(_proc.info['pid'], signal.SIGTERM)
_killed = True
elif "decnet.cli" in _cmd and "mutate" in _cmd and "--watch" in _cmd:
console.print(f"[yellow]Stopping DECNET Mutator Watcher (PID {_proc.info['pid']})...[/]")
os.kill(_proc.info['pid'], signal.SIGTERM)
_killed = True
except (psutil.NoSuchProcess, psutil.AccessDenied):
continue
if _killed:
console.print("[green]Background processes stopped.[/]")
def teardown(decky_id: str | None = None) -> None: def teardown(decky_id: str | None = None) -> None:
state = load_state() state = load_state()
if state is None: if state is None:
@@ -168,7 +174,6 @@ def teardown(decky_id: str | None = None) -> None:
client = docker.from_env() client = docker.from_env()
if decky_id: if decky_id:
# Bring down only the services matching this decky
svc_names = [f"{decky_id}-{svc}" for svc in [d.services for d in config.deckies if d.name == decky_id]] svc_names = [f"{decky_id}-{svc}" for svc in [d.services for d in config.deckies if d.name == decky_id]]
if not svc_names: if not svc_names:
console.print(f"[red]Decky '{decky_id}' not found in current deployment.[/]") console.print(f"[red]Decky '{decky_id}' not found in current deployment.[/]")
@@ -186,10 +191,7 @@ def teardown(decky_id: str | None = None) -> None:
teardown_host_macvlan(decky_range) teardown_host_macvlan(decky_range)
remove_macvlan_network(client) remove_macvlan_network(client)
clear_state() clear_state()
# Kill API when doing full teardown
_kill_api()
net_driver = "IPvlan" if config.ipvlan else "MACVLAN" net_driver = "IPvlan" if config.ipvlan else "MACVLAN"
console.print(f"[green]All deckies torn down. {net_driver} network removed.[/]") console.print(f"[green]All deckies torn down. {net_driver} network removed.[/]")
@@ -210,7 +212,7 @@ def status() -> None:
table.add_column("Hostname") table.add_column("Hostname")
table.add_column("Status") table.add_column("Status")
running = {c.name: c.status for c in client.containers.list(all=True)} running = {c.name: c.status for c in client.containers.list(all=True, ignore_removed=True)}
for decky in config.deckies: for decky in config.deckies:
statuses = [] statuses = []

View File

@@ -9,14 +9,56 @@ _ROOT: Path = Path(__file__).parent.parent.absolute()
load_dotenv(_ROOT / ".env.local") load_dotenv(_ROOT / ".env.local")
load_dotenv(_ROOT / ".env") load_dotenv(_ROOT / ".env")
def _port(name: str, default: int) -> int:
raw = os.environ.get(name, str(default))
try:
value = int(raw)
except ValueError:
raise ValueError(f"Environment variable '{name}' must be an integer, got '{raw}'.")
if not (1 <= value <= 65535):
raise ValueError(f"Environment variable '{name}' must be 165535, got {value}.")
return value
def _require_env(name: str) -> str:
"""Return the env var value or raise at startup if it is unset or a known-bad default."""
_KNOWN_BAD = {"fallback-secret-key-change-me", "admin", "secret", "password", "changeme"}
value = os.environ.get(name)
if not value:
raise ValueError(
f"Required environment variable '{name}' is not set. "
f"Set it in .env.local or export it before starting DECNET."
)
if any(k.startswith("PYTEST") for k in os.environ):
return value
if value.lower() in _KNOWN_BAD:
raise ValueError(
f"Environment variable '{name}' is set to an insecure default ('{value}'). "
f"Choose a strong, unique value before starting DECNET."
)
return value
# API Options # API Options
DECNET_API_HOST: str = os.environ.get("DECNET_API_HOST", "0.0.0.0") DECNET_API_HOST: str = os.environ.get("DECNET_API_HOST", "0.0.0.0") # nosec B104
DECNET_API_PORT: int = int(os.environ.get("DECNET_API_PORT", "8000")) DECNET_API_PORT: int = _port("DECNET_API_PORT", 8000)
DECNET_JWT_SECRET: str = os.environ.get("DECNET_JWT_SECRET", "fallback-secret-key-change-me") DECNET_JWT_SECRET: str = _require_env("DECNET_JWT_SECRET")
DECNET_INGEST_LOG_FILE: str | None = os.environ.get("DECNET_INGEST_LOG_FILE", "/var/log/decnet/decnet.log") DECNET_INGEST_LOG_FILE: str | None = os.environ.get("DECNET_INGEST_LOG_FILE", "/var/log/decnet/decnet.log")
# Web Dashboard Options # Web Dashboard Options
DECNET_WEB_HOST: str = os.environ.get("DECNET_WEB_HOST", "0.0.0.0") DECNET_WEB_HOST: str = os.environ.get("DECNET_WEB_HOST", "0.0.0.0") # nosec B104
DECNET_WEB_PORT: int = int(os.environ.get("DECNET_WEB_PORT", "8080")) DECNET_WEB_PORT: int = _port("DECNET_WEB_PORT", 8080)
DECNET_ADMIN_USER: str = os.environ.get("DECNET_ADMIN_USER", "admin") DECNET_ADMIN_USER: str = os.environ.get("DECNET_ADMIN_USER", "admin")
DECNET_ADMIN_PASSWORD: str = os.environ.get("DECNET_ADMIN_PASSWORD", "admin") DECNET_ADMIN_PASSWORD: str = os.environ.get("DECNET_ADMIN_PASSWORD", "admin")
DECNET_DEVELOPER: bool = os.environ.get("DECNET_DEVELOPER", "False").lower() == "true"
# CORS — comma-separated list of allowed origins for the web dashboard API.
# Defaults to the configured web host/port. Override with DECNET_CORS_ORIGINS if needed.
# Example: DECNET_CORS_ORIGINS=http://192.168.1.50:9090,https://dashboard.example.com
_web_hostname: str = "localhost" if DECNET_WEB_HOST in ("0.0.0.0", "127.0.0.1", "::") else DECNET_WEB_HOST # nosec B104
_cors_default: str = f"http://{_web_hostname}:{DECNET_WEB_PORT}"
_cors_raw: str = os.environ.get("DECNET_CORS_ORIGINS", _cors_default)
DECNET_CORS_ORIGINS: list[str] = [o.strip() for o in _cors_raw.split(",") if o.strip()]

179
decnet/fleet.py Normal file
View File

@@ -0,0 +1,179 @@
"""
Fleet builder — shared logic for constructing DeckyConfig lists.
Used by both the CLI and the web API router to build deckies from
flags or INI config. Lives here (not in cli.py) so that the web layer
and the mutation engine can import it without depending on the CLI.
"""
import random
from typing import Optional
from decnet.archetypes import Archetype, get_archetype
from decnet.config import DeckyConfig, random_hostname
from decnet.distros import all_distros, get_distro, random_distro
from decnet.ini_loader import IniConfig
from decnet.services.registry import all_services
def all_service_names() -> list[str]:
"""Return all registered service names from the live plugin registry."""
return sorted(all_services().keys())
def resolve_distros(
distros_explicit: list[str] | None,
randomize_distros: bool,
n: int,
archetype: Archetype | None = None,
) -> list[str]:
"""Return a list of n distro slugs based on flags or archetype preference."""
if distros_explicit:
return [distros_explicit[i % len(distros_explicit)] for i in range(n)]
if randomize_distros:
return [random_distro().slug for _ in range(n)]
if archetype:
pool = archetype.preferred_distros
return [pool[i % len(pool)] for i in range(n)]
slugs = list(all_distros().keys())
return [slugs[i % len(slugs)] for i in range(n)]
def build_deckies(
n: int,
ips: list[str],
services_explicit: list[str] | None,
randomize_services: bool,
distros_explicit: list[str] | None = None,
randomize_distros: bool = False,
archetype: Archetype | None = None,
mutate_interval: Optional[int] = None,
) -> list[DeckyConfig]:
"""Build a list of DeckyConfigs from CLI-style flags."""
deckies = []
used_combos: set[frozenset] = set()
distro_slugs = resolve_distros(distros_explicit, randomize_distros, n, archetype)
for i, ip in enumerate(ips):
name = f"decky-{i + 1:02d}"
distro = get_distro(distro_slugs[i])
hostname = random_hostname(distro.slug)
if services_explicit:
svc_list = services_explicit
elif archetype:
svc_list = list(archetype.services)
elif randomize_services:
svc_pool = all_service_names()
attempts = 0
while True:
count = random.randint(1, min(3, len(svc_pool))) # nosec B311
chosen = frozenset(random.sample(svc_pool, count)) # nosec B311
attempts += 1
if chosen not in used_combos or attempts > 20:
break
svc_list = list(chosen)
used_combos.add(chosen)
else:
raise ValueError("Provide services_explicit, archetype, or randomize_services=True.")
deckies.append(
DeckyConfig(
name=name,
ip=ip,
services=svc_list,
distro=distro.slug,
base_image=distro.image,
build_base=distro.build_base,
hostname=hostname,
archetype=archetype.slug if archetype else None,
nmap_os=archetype.nmap_os if archetype else "linux",
mutate_interval=mutate_interval,
)
)
return deckies
def build_deckies_from_ini(
ini: IniConfig,
subnet_cidr: str,
gateway: str,
host_ip: str,
randomize: bool,
cli_mutate_interval: int | None = None,
) -> list[DeckyConfig]:
"""Build DeckyConfig list from an IniConfig, auto-allocating missing IPs."""
from ipaddress import IPv4Address, IPv4Network
import time
now = time.time()
explicit_ips: set[IPv4Address] = {
IPv4Address(s.ip) for s in ini.deckies if s.ip
}
net = IPv4Network(subnet_cidr, strict=False)
reserved = {
net.network_address,
net.broadcast_address,
IPv4Address(gateway),
IPv4Address(host_ip),
} | explicit_ips
auto_pool = (str(addr) for addr in net.hosts() if addr not in reserved)
deckies: list[DeckyConfig] = []
for spec in ini.deckies:
arch: Archetype | None = None
if spec.archetype:
arch = get_archetype(spec.archetype)
distro_pool = arch.preferred_distros if arch else list(all_distros().keys())
distro = get_distro(distro_pool[len(deckies) % len(distro_pool)])
hostname = random_hostname(distro.slug)
ip = spec.ip or next(auto_pool, None)
if ip is None:
raise ValueError(f"Not enough free IPs in {subnet_cidr} while assigning IP for '{spec.name}'.")
if spec.services:
known = set(all_service_names())
unknown = [s for s in spec.services if s not in known]
if unknown:
raise ValueError(
f"Unknown service(s) in [{spec.name}]: {unknown}. "
f"Available: {all_service_names()}"
)
svc_list = spec.services
elif arch:
svc_list = list(arch.services)
elif randomize:
svc_pool = all_service_names()
count = random.randint(1, min(3, len(svc_pool))) # nosec B311
svc_list = random.sample(svc_pool, count) # nosec B311
else:
raise ValueError(
f"Decky '[{spec.name}]' has no services= in config. "
"Add services=, archetype=, or use --randomize-services."
)
resolved_nmap_os = spec.nmap_os or (arch.nmap_os if arch else "linux")
decky_mutate_interval = cli_mutate_interval
if decky_mutate_interval is None:
decky_mutate_interval = spec.mutate_interval if spec.mutate_interval is not None else ini.mutate_interval
deckies.append(DeckyConfig(
name=spec.name,
ip=ip,
services=svc_list,
distro=distro.slug,
base_image=distro.image,
build_base=distro.build_base,
hostname=hostname,
archetype=arch.slug if arch else None,
service_config=spec.service_config,
nmap_os=resolved_nmap_os,
mutate_interval=decky_mutate_interval,
last_mutated=now,
))
return deckies

View File

@@ -6,7 +6,6 @@ Format:
net=192.168.1.0/24 net=192.168.1.0/24
gw=192.168.1.1 gw=192.168.1.1
interface=wlp6s0 interface=wlp6s0
log_target=192.168.1.5:5140 # optional
[hostname-1] [hostname-1]
ip=192.168.1.82 # optional ip=192.168.1.82 # optional
@@ -71,7 +70,6 @@ class IniConfig:
subnet: str | None = None subnet: str | None = None
gateway: str | None = None gateway: str | None = None
interface: str | None = None interface: str | None = None
log_target: str | None = None
mutate_interval: int | None = None mutate_interval: int | None = None
deckies: list[DeckySpec] = field(default_factory=list) deckies: list[DeckySpec] = field(default_factory=list)
custom_services: list[CustomServiceSpec] = field(default_factory=list) custom_services: list[CustomServiceSpec] = field(default_factory=list)
@@ -117,7 +115,6 @@ def _parse_configparser(cp: configparser.ConfigParser) -> IniConfig:
cfg.subnet = g.get("net") cfg.subnet = g.get("net")
cfg.gateway = g.get("gw") cfg.gateway = g.get("gw")
cfg.interface = g.get("interface") cfg.interface = g.get("interface")
cfg.log_target = g.get("log_target") or g.get("log-target")
from decnet.services.registry import all_services from decnet.services.registry import all_services
known_services = set(all_services().keys()) known_services = set(all_services().keys())

View File

@@ -49,11 +49,10 @@ def _get_logger() -> logging.Logger:
def write_syslog(line: str) -> None: def write_syslog(line: str) -> None:
"""Write a single RFC 5424 syslog line to the rotating log file.""" """Write a single RFC 5424 syslog line to the rotating log file."""
try: try:
_get_logger().info(line) _get_logger().info(line)
except Exception: except Exception: # nosec B110
pass pass
def get_log_path() -> Path: def get_log_path() -> Path:
"""Return the configured log file path (for tests/inspection).""" """Return the configured log file path (for tests/inspection)."""
return Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE)) return Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))

View File

@@ -0,0 +1,3 @@
from decnet.mutator.engine import mutate_all, mutate_decky, run_watch_loop
__all__ = ["mutate_all", "mutate_decky", "run_watch_loop"]

View File

@@ -4,43 +4,21 @@ Handles dynamic rotation of exposed honeypot services over time.
""" """
import random import random
import subprocess
import time import time
from pathlib import Path
from typing import Optional from typing import Optional
from rich.console import Console from rich.console import Console
from decnet.archetypes import get_archetype from decnet.archetypes import get_archetype
from decnet.cli import _all_service_names from decnet.fleet import all_service_names
from decnet.composer import write_compose from decnet.composer import write_compose
from decnet.config import DeckyConfig, load_state, save_state from decnet.config import DeckyConfig, load_state, save_state
from decnet.deployer import COMPOSE_FILE from decnet.engine import COMPOSE_FILE, _compose_with_retry
import subprocess # nosec B404
console = Console() console = Console()
def _compose_with_retry(
*args: str,
compose_file: Path = COMPOSE_FILE,
retries: int = 3,
delay: float = 5.0,
) -> None:
"""Run a docker compose command, retrying on transient failures."""
last_exc: subprocess.CalledProcessError | None = None
cmd = ["docker", "compose", "-f", str(compose_file), *args]
for attempt in range(1, retries + 1):
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode == 0:
if result.stdout:
print(result.stdout, end="")
return
last_exc = subprocess.CalledProcessError(
result.returncode, cmd, result.stdout, result.stderr
)
if attempt < retries:
time.sleep(delay)
delay *= 2
raise last_exc
def mutate_decky(decky_name: str) -> bool: def mutate_decky(decky_name: str) -> bool:
""" """
@@ -59,27 +37,25 @@ def mutate_decky(decky_name: str) -> bool:
console.print(f"[red]Decky '{decky_name}' not found in state.[/]") console.print(f"[red]Decky '{decky_name}' not found in state.[/]")
return False return False
# Determine allowed services pool
if decky.archetype: if decky.archetype:
try: try:
arch = get_archetype(decky.archetype) arch = get_archetype(decky.archetype)
svc_pool = list(arch.services) svc_pool = list(arch.services)
except ValueError: except ValueError:
svc_pool = _all_service_names() svc_pool = all_service_names()
else: else:
svc_pool = _all_service_names() svc_pool = all_service_names()
if not svc_pool: if not svc_pool:
console.print(f"[yellow]No services available for mutating '{decky_name}'.[/]") console.print(f"[yellow]No services available for mutating '{decky_name}'.[/]")
return False return False
# Prevent mutating to the exact same set if possible
current_services = set(decky.services) current_services = set(decky.services)
attempts = 0 attempts = 0
while True: while True:
count = random.randint(1, min(3, len(svc_pool))) count = random.randint(1, min(3, len(svc_pool))) # nosec B311
chosen = set(random.sample(svc_pool, count)) chosen = set(random.sample(svc_pool, count)) # nosec B311
attempts += 1 attempts += 1
if chosen != current_services or attempts > 20: if chosen != current_services or attempts > 20:
break break
@@ -87,15 +63,11 @@ def mutate_decky(decky_name: str) -> bool:
decky.services = list(chosen) decky.services = list(chosen)
decky.last_mutated = time.time() decky.last_mutated = time.time()
# Save new state
save_state(config, compose_path) save_state(config, compose_path)
# Regenerate compose file
write_compose(config, compose_path) write_compose(config, compose_path)
console.print(f"[cyan]Mutating '{decky_name}' to services: {', '.join(decky.services)}[/]") console.print(f"[cyan]Mutating '{decky_name}' to services: {', '.join(decky.services)}[/]")
# Bring up the new services and remove old orphans
try: try:
_compose_with_retry("up", "-d", "--remove-orphans", compose_file=compose_path) _compose_with_retry("up", "-d", "--remove-orphans", compose_file=compose_path)
except subprocess.CalledProcessError as e: except subprocess.CalledProcessError as e:
@@ -104,6 +76,7 @@ def mutate_decky(decky_name: str) -> bool:
return True return True
def mutate_all(force: bool = False) -> None: def mutate_all(force: bool = False) -> None:
""" """
Check all deckies and mutate those that are due. Check all deckies and mutate those that are due.
@@ -116,7 +89,7 @@ def mutate_all(force: bool = False) -> None:
config, _ = state config, _ = state
now = time.time() now = time.time()
mutated_count = 0 mutated_count = 0
for decky in config.deckies: for decky in config.deckies:
interval_mins = decky.mutate_interval or config.mutate_interval interval_mins = decky.mutate_interval or config.mutate_interval
@@ -133,14 +106,11 @@ def mutate_all(force: bool = False) -> None:
success = mutate_decky(decky.name) success = mutate_decky(decky.name)
if success: if success:
mutated_count += 1 mutated_count += 1
# Re-load state for next decky just in case, but mutate_decky saves it.
# However, mutate_decky operates on its own loaded state.
# Since mutate_decky loads and saves the state, our loop over `config.deckies`
# has an outdated `last_mutated` if we don't reload. It's fine because we process one by one.
if mutated_count == 0 and not force: if mutated_count == 0 and not force:
console.print("[dim]No deckies are due for mutation.[/]") console.print("[dim]No deckies are due for mutation.[/]")
def run_watch_loop(poll_interval_secs: int = 10) -> None: def run_watch_loop(poll_interval_secs: int = 10) -> None:
"""Run an infinite loop checking for deckies that need mutation.""" """Run an infinite loop checking for deckies that need mutation."""
console.print(f"[green]DECNET Mutator Watcher started (polling every {poll_interval_secs}s).[/]") console.print(f"[green]DECNET Mutator Watcher started (polling every {poll_interval_secs}s).[/]")

View File

@@ -9,7 +9,7 @@ Handles:
""" """
import os import os
import subprocess import subprocess # nosec B404
from ipaddress import IPv4Address, IPv4Interface, IPv4Network from ipaddress import IPv4Address, IPv4Interface, IPv4Network
import docker import docker
@@ -24,7 +24,7 @@ HOST_IPVLAN_IFACE = "decnet_ipvlan0"
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
def _run(cmd: list[str], check: bool = True) -> subprocess.CompletedProcess: def _run(cmd: list[str], check: bool = True) -> subprocess.CompletedProcess:
return subprocess.run(cmd, capture_output=True, text=True, check=check) return subprocess.run(cmd, capture_output=True, text=True, check=check) # nosec B603 B404
def detect_interface() -> str: def detect_interface() -> str:

View File

@@ -5,17 +5,31 @@ Maps an nmap OS family slug to a dict of Linux kernel sysctls that, when applied
to a container's network namespace, make its TCP/IP stack behaviour resemble the to a container's network namespace, make its TCP/IP stack behaviour resemble the
claimed OS as closely as possible within the Linux kernel's constraints. claimed OS as closely as possible within the Linux kernel's constraints.
All sysctls listed here are network-namespace-scoped and safe to set per-container
without --privileged (beyond the NET_ADMIN capability already granted).
Primary discriminator leveraged by nmap: net.ipv4.ip_default_ttl (TTL) Primary discriminator leveraged by nmap: net.ipv4.ip_default_ttl (TTL)
Linux → 64 Linux → 64
Windows → 128 Windows → 128
BSD (FreeBSD/macOS)→ 64 (different TCP options, but same TTL as Linux) BSD (FreeBSD/macOS)→ 64 (different TCP options, but same TTL as Linux)
Embedded / network → 255 Embedded / network → 255
Secondary tuning (TCP behaviour): Secondary discriminators (nmap OPS / WIN / ECN / T2T6 probe groups):
net.ipv4.tcp_syn_retries SYN retransmits before giving up net.ipv4.tcp_syn_retries SYN retransmits before giving up
net.ipv4.tcp_timestamps TCP timestamp option (OPS probes); Windows = off
net.ipv4.tcp_window_scaling Window scale option; embedded/Cisco typically off
net.ipv4.tcp_sack Selective ACK option; absent on most embedded stacks
net.ipv4.tcp_ecn ECN negotiation; Linux offers (2), Windows off (0)
net.ipv4.ip_no_pmtu_disc DF bit in ICMP replies (IE probes); embedded on
net.ipv4.tcp_fin_timeout FIN_WAIT_2 seconds (T2T6 timing); Windows shorter
ICMP tuning (nmap IE / U1 probe groups):
net.ipv4.icmp_ratelimit Min ms between ICMP error replies; Windows = 0 (none)
net.ipv4.icmp_ratemask Bitmask of ICMP types subject to rate limiting
Note: net.core.rmem_default is a global (non-namespaced) sysctl and cannot be Note: net.core.rmem_default is a global (non-namespaced) sysctl and cannot be
set per-container without --privileged; it is intentionally excluded. set per-container without --privileged; TCP window size is already correct for
Windows (64240) from the kernel's default tcp_rmem settings.
""" """
from __future__ import annotations from __future__ import annotations
@@ -24,27 +38,69 @@ OS_SYSCTLS: dict[str, dict[str, str]] = {
"linux": { "linux": {
"net.ipv4.ip_default_ttl": "64", "net.ipv4.ip_default_ttl": "64",
"net.ipv4.tcp_syn_retries": "6", "net.ipv4.tcp_syn_retries": "6",
"net.ipv4.tcp_timestamps": "1",
"net.ipv4.tcp_window_scaling": "1",
"net.ipv4.tcp_sack": "1",
"net.ipv4.tcp_ecn": "2",
"net.ipv4.ip_no_pmtu_disc": "0",
"net.ipv4.tcp_fin_timeout": "60",
"net.ipv4.icmp_ratelimit": "1000",
"net.ipv4.icmp_ratemask": "6168",
}, },
"windows": { "windows": {
"net.ipv4.ip_default_ttl": "128", "net.ipv4.ip_default_ttl": "128",
"net.ipv4.tcp_syn_retries": "2", "net.ipv4.tcp_syn_retries": "2",
"net.ipv4.tcp_timestamps": "0",
"net.ipv4.tcp_window_scaling": "1",
"net.ipv4.tcp_sack": "1",
"net.ipv4.tcp_ecn": "0",
"net.ipv4.ip_no_pmtu_disc": "0",
"net.ipv4.tcp_fin_timeout": "30",
"net.ipv4.icmp_ratelimit": "0",
"net.ipv4.icmp_ratemask": "0",
}, },
"bsd": { "bsd": {
"net.ipv4.ip_default_ttl": "64", "net.ipv4.ip_default_ttl": "64",
"net.ipv4.tcp_syn_retries": "6", "net.ipv4.tcp_syn_retries": "6",
"net.ipv4.tcp_timestamps": "1",
"net.ipv4.tcp_window_scaling": "1",
"net.ipv4.tcp_sack": "1",
"net.ipv4.tcp_ecn": "0",
"net.ipv4.ip_no_pmtu_disc": "0",
"net.ipv4.tcp_fin_timeout": "60",
"net.ipv4.icmp_ratelimit": "250",
"net.ipv4.icmp_ratemask": "6168",
}, },
"embedded": { "embedded": {
"net.ipv4.ip_default_ttl": "255", "net.ipv4.ip_default_ttl": "255",
"net.ipv4.tcp_syn_retries": "3", "net.ipv4.tcp_syn_retries": "3",
"net.ipv4.tcp_timestamps": "0",
"net.ipv4.tcp_window_scaling": "0",
"net.ipv4.tcp_sack": "0",
"net.ipv4.tcp_ecn": "0",
"net.ipv4.ip_no_pmtu_disc": "1",
"net.ipv4.tcp_fin_timeout": "15",
"net.ipv4.icmp_ratelimit": "0",
"net.ipv4.icmp_ratemask": "0",
}, },
"cisco": { "cisco": {
"net.ipv4.ip_default_ttl": "255", "net.ipv4.ip_default_ttl": "255",
"net.ipv4.tcp_syn_retries": "2", "net.ipv4.tcp_syn_retries": "2",
"net.ipv4.tcp_timestamps": "0",
"net.ipv4.tcp_window_scaling": "0",
"net.ipv4.tcp_sack": "0",
"net.ipv4.tcp_ecn": "0",
"net.ipv4.ip_no_pmtu_disc": "1",
"net.ipv4.tcp_fin_timeout": "15",
"net.ipv4.icmp_ratelimit": "0",
"net.ipv4.icmp_ratemask": "0",
}, },
} }
_DEFAULT_OS = "linux" _DEFAULT_OS = "linux"
_REQUIRED_SYSCTLS: frozenset[str] = frozenset(OS_SYSCTLS["linux"].keys())
def get_os_sysctls(nmap_os: str) -> dict[str, str]: def get_os_sysctls(nmap_os: str) -> dict[str, str]:
"""Return the sysctl dict for *nmap_os*. Falls back to Linux on unknown slugs.""" """Return the sysctl dict for *nmap_os*. Falls back to Linux on unknown slugs."""
@@ -54,3 +110,4 @@ def get_os_sysctls(nmap_os: str) -> dict[str, str]:
def all_os_families() -> list[str]: def all_os_families() -> list[str]:
"""Return all registered nmap OS family slugs.""" """Return all registered nmap OS family slugs."""
return list(OS_SYSCTLS.keys()) return list(OS_SYSCTLS.keys())

View File

@@ -1,26 +1,35 @@
from pathlib import Path
from decnet.services.base import BaseService from decnet.services.base import BaseService
class ConpotService(BaseService): class ConpotService(BaseService):
"""ICS/SCADA honeypot covering Modbus (502), SNMP (161 UDP), and HTTP (80). """ICS/SCADA honeypot covering Modbus (502), SNMP (161 UDP), and HTTP (80).
Uses the official honeynet/conpot image which ships a default ICS profile Uses a custom build context wrapping the official honeynet/conpot image
that emulates a Siemens S7-200 PLC. to fix Modbus binding to port 502.
""" """
name = "conpot" name = "conpot"
ports = [502, 161, 80] ports = [502, 161, 80]
default_image = "honeynet/conpot" default_image = "build"
def compose_fragment(self, decky_name: str, log_target: str | None = None, service_cfg: dict | None = None) -> dict: def compose_fragment(self, decky_name: str, log_target: str | None = None, service_cfg: dict | None = None) -> dict:
env = {
"CONPOT_TEMPLATE": "default",
"NODE_NAME": decky_name,
}
if log_target:
env["LOG_TARGET"] = log_target
return { return {
"image": "honeynet/conpot", "build": {
"context": str(self.dockerfile_context()),
"args": {"BASE_IMAGE": "honeynet/conpot:latest"},
},
"container_name": f"{decky_name}-conpot", "container_name": f"{decky_name}-conpot",
"restart": "unless-stopped", "restart": "unless-stopped",
"environment": { "environment": env,
"CONPOT_TEMPLATE": "default",
},
} }
def dockerfile_context(self): def dockerfile_context(self):
return None return Path(__file__).parent.parent.parent / "templates" / "conpot"

View File

@@ -1,46 +0,0 @@
from pathlib import Path
from decnet.services.base import BaseService
TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "real_ssh"
class RealSSHService(BaseService):
"""
Fully interactive OpenSSH server — no honeypot emulation.
Used for the deaddeck (entry-point machine). Attackers get a real shell.
Credentials are intentionally weak to invite exploitation.
service_cfg keys:
password Root password (default: "admin")
hostname Override container hostname
"""
name = "real_ssh"
ports = [22]
default_image = "build"
def compose_fragment(
self,
decky_name: str,
log_target: str | None = None,
service_cfg: dict | None = None,
) -> dict:
cfg = service_cfg or {}
env: dict = {
"SSH_ROOT_PASSWORD": cfg.get("password", "admin"),
}
if "hostname" in cfg:
env["SSH_HOSTNAME"] = cfg["hostname"]
return {
"build": {"context": str(TEMPLATES_DIR)},
"container_name": f"{decky_name}-real-ssh",
"restart": "unless-stopped",
"cap_add": ["NET_BIND_SERVICE"],
"environment": env,
}
def dockerfile_context(self) -> Path:
return TEMPLATES_DIR

View File

@@ -0,0 +1,43 @@
from pathlib import Path
from decnet.services.base import BaseService
# Reuses the same template as the smtp service — only difference is
# SMTP_OPEN_RELAY=1 in the environment, which enables the open relay persona.
_TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "smtp"
class SMTPRelayService(BaseService):
"""SMTP open relay bait — accepts any RCPT TO and delivers messages."""
name = "smtp_relay"
ports = [25, 587]
default_image = "build"
def compose_fragment(
self,
decky_name: str,
log_target: str | None = None,
service_cfg: dict | None = None,
) -> dict:
cfg = service_cfg or {}
fragment: dict = {
"build": {"context": str(_TEMPLATES_DIR)},
"container_name": f"{decky_name}-smtp_relay",
"restart": "unless-stopped",
"cap_add": ["NET_BIND_SERVICE"],
"environment": {
"NODE_NAME": decky_name,
"SMTP_OPEN_RELAY": "1",
},
}
if log_target:
fragment["environment"]["LOG_TARGET"] = log_target
if "banner" in cfg:
fragment["environment"]["SMTP_BANNER"] = cfg["banner"]
if "mta" in cfg:
fragment["environment"]["SMTP_MTA"] = cfg["mta"]
return fragment
def dockerfile_context(self) -> Path:
return _TEMPLATES_DIR

View File

@@ -1,12 +1,26 @@
from pathlib import Path from pathlib import Path
from decnet.services.base import BaseService from decnet.services.base import BaseService
TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "cowrie" TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "ssh"
class SSHService(BaseService): class SSHService(BaseService):
"""
Interactive OpenSSH server for general-purpose deckies.
Replaced Cowrie emulation with a real sshd so fingerprinting tools and
experienced attackers cannot trivially identify the honeypot. Auth events,
sudo activity, and interactive commands are all forwarded to stdout as
RFC 5424 via the rsyslog bridge baked into the image.
service_cfg keys:
password Root password (default: "admin")
hostname Override container hostname
"""
name = "ssh" name = "ssh"
ports = [22, 2222] ports = [22]
default_image = "build" default_image = "build"
def compose_fragment( def compose_fragment(
@@ -17,28 +31,10 @@ class SSHService(BaseService):
) -> dict: ) -> dict:
cfg = service_cfg or {} cfg = service_cfg or {}
env: dict = { env: dict = {
"NODE_NAME": decky_name, "SSH_ROOT_PASSWORD": cfg.get("password", "admin"),
"COWRIE_HOSTNAME": decky_name,
"COWRIE_HONEYPOT_LISTEN_ENDPOINTS": "tcp:22:interface=0.0.0.0 tcp:2222:interface=0.0.0.0",
"COWRIE_SSH_LISTEN_ENDPOINTS": "tcp:22:interface=0.0.0.0 tcp:2222:interface=0.0.0.0",
} }
if log_target: if "hostname" in cfg:
host, port = log_target.rsplit(":", 1) env["SSH_HOSTNAME"] = cfg["hostname"]
env["COWRIE_OUTPUT_TCP_ENABLED"] = "true"
env["COWRIE_OUTPUT_TCP_HOST"] = host
env["COWRIE_OUTPUT_TCP_PORT"] = port
# Optional persona overrides
if "kernel_version" in cfg:
env["COWRIE_HONEYPOT_KERNEL_VERSION"] = cfg["kernel_version"]
if "kernel_build_string" in cfg:
env["COWRIE_HONEYPOT_KERNEL_BUILD_STRING"] = cfg["kernel_build_string"]
if "hardware_platform" in cfg:
env["COWRIE_HONEYPOT_HARDWARE_PLATFORM"] = cfg["hardware_platform"]
if "ssh_banner" in cfg:
env["COWRIE_SSH_VERSION"] = cfg["ssh_banner"]
if "users" in cfg:
env["COWRIE_USERDB_ENTRIES"] = cfg["users"]
return { return {
"build": {"context": str(TEMPLATES_DIR)}, "build": {"context": str(TEMPLATES_DIR)},

View File

@@ -1,31 +1,47 @@
from pathlib import Path
from decnet.services.base import BaseService from decnet.services.base import BaseService
TEMPLATES_DIR = Path(__file__).parent.parent.parent / "templates" / "telnet"
class TelnetService(BaseService): class TelnetService(BaseService):
"""
Real telnetd using busybox telnetd + rsyslog logging pipeline.
Replaced Cowrie emulation (which also started an SSH daemon on port 22)
with a real busybox telnetd so only port 23 is exposed and auth events
are logged as RFC 5424 via the same rsyslog bridge used by the SSH service.
service_cfg keys:
password Root password (default: "admin")
hostname Override container hostname
"""
name = "telnet" name = "telnet"
ports = [23] ports = [23]
default_image = "cowrie/cowrie" default_image = "build"
def compose_fragment(self, decky_name: str, log_target: str | None = None, service_cfg: dict | None = None) -> dict: def compose_fragment(
self,
decky_name: str,
log_target: str | None = None,
service_cfg: dict | None = None,
) -> dict:
cfg = service_cfg or {}
env: dict = { env: dict = {
"COWRIE_HONEYPOT_HOSTNAME": decky_name, "TELNET_ROOT_PASSWORD": cfg.get("password", "admin"),
"COWRIE_TELNET_ENABLED": "true",
"COWRIE_TELNET_LISTEN_ENDPOINTS": "tcp:23:interface=0.0.0.0",
# Disable SSH so this container is telnet-only
"COWRIE_SSH_ENABLED": "false",
} }
if log_target: if "hostname" in cfg:
host, port = log_target.rsplit(":", 1) env["TELNET_HOSTNAME"] = cfg["hostname"]
env["COWRIE_OUTPUT_TCP_ENABLED"] = "true"
env["COWRIE_OUTPUT_TCP_HOST"] = host
env["COWRIE_OUTPUT_TCP_PORT"] = port
return { return {
"image": "cowrie/cowrie", "build": {"context": str(TEMPLATES_DIR)},
"container_name": f"{decky_name}-telnet", "container_name": f"{decky_name}-telnet",
"restart": "unless-stopped", "restart": "unless-stopped",
"cap_add": ["NET_BIND_SERVICE"], "cap_add": ["NET_BIND_SERVICE"],
"environment": env, "environment": env,
} }
def dockerfile_context(self): def dockerfile_context(self) -> Path:
return None return TEMPLATES_DIR

View File

@@ -1,344 +1,72 @@
import uuid import asyncio
import logging
import os
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from datetime import timedelta
from typing import Any, AsyncGenerator, Optional from typing import Any, AsyncGenerator, Optional
import jwt from fastapi import FastAPI
from fastapi import Depends, FastAPI, HTTPException, Query, status, Request
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware from fastapi.middleware.cors import CORSMiddleware
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel, Field
from decnet.web.auth import ( from decnet.env import DECNET_CORS_ORIGINS, DECNET_DEVELOPER, DECNET_INGEST_LOG_FILE
ACCESS_TOKEN_EXPIRE_MINUTES, from decnet.web.dependencies import repo
ALGORITHM, from decnet.collector import log_collector_worker
SECRET_KEY,
create_access_token,
get_password_hash,
verify_password,
)
from decnet.web.sqlite_repository import SQLiteRepository
from decnet.web.ingester import log_ingestion_worker from decnet.web.ingester import log_ingestion_worker
from decnet.env import DECNET_ADMIN_USER, DECNET_ADMIN_PASSWORD from decnet.web.router import api_router
import asyncio
repo: SQLiteRepository = SQLiteRepository() log = logging.getLogger(__name__)
ingestion_task: Optional[asyncio.Task[Any]] = None ingestion_task: Optional[asyncio.Task[Any]] = None
collector_task: Optional[asyncio.Task[Any]] = None
@asynccontextmanager @asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]: async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
global ingestion_task global ingestion_task, collector_task
await repo.initialize()
# Create default admin if no users exist for attempt in range(1, 6):
_admin_user: Optional[dict[str, Any]] = await repo.get_user_by_username(DECNET_ADMIN_USER) try:
if not _admin_user: await repo.initialize()
await repo.create_user( break
{ except Exception as exc:
"uuid": str(uuid.uuid4()), log.warning("DB init attempt %d/5 failed: %s", attempt, exc)
"username": DECNET_ADMIN_USER, if attempt == 5:
"password_hash": get_password_hash(DECNET_ADMIN_PASSWORD), log.error("DB failed to initialize after 5 attempts — startup may be degraded")
"role": "admin", await asyncio.sleep(0.5)
"must_change_password": True
}
)
# Start background ingestion task # Start background ingestion task
ingestion_task = asyncio.create_task(log_ingestion_worker(repo)) if ingestion_task is None or ingestion_task.done():
ingestion_task = asyncio.create_task(log_ingestion_worker(repo))
# Start Docker log collector (writes to log file; ingester reads from it)
_log_file = os.environ.get("DECNET_INGEST_LOG_FILE", DECNET_INGEST_LOG_FILE)
if _log_file and (collector_task is None or collector_task.done()):
collector_task = asyncio.create_task(log_collector_worker(_log_file))
else:
log.warning("DECNET_INGEST_LOG_FILE not set — Docker log collection disabled.")
yield yield
# Shutdown ingestion task # Shutdown background tasks
if ingestion_task: for task in (ingestion_task, collector_task):
ingestion_task.cancel() if task:
task.cancel()
app: FastAPI = FastAPI( app: FastAPI = FastAPI(
title="DECNET Web Dashboard API", title="DECNET Web Dashboard API",
version="1.0.0", version="1.0.0",
lifespan=lifespan lifespan=lifespan,
docs_url="/docs" if DECNET_DEVELOPER else None,
redoc_url="/redoc" if DECNET_DEVELOPER else None,
openapi_url="/openapi.json" if DECNET_DEVELOPER else None
) )
app.add_middleware( app.add_middleware(
CORSMiddleware, CORSMiddleware,
allow_origins=["*"], allow_origins=DECNET_CORS_ORIGINS,
allow_credentials=True, allow_credentials=False,
allow_methods=["*"], allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
allow_headers=["*"], allow_headers=["Authorization", "Content-Type", "Last-Event-ID"],
) )
# Include the modular API router
oauth2_scheme: OAuth2PasswordBearer = OAuth2PasswordBearer(tokenUrl="/api/v1/auth/login") app.include_router(api_router, prefix="/api/v1")
async def get_current_user(request: Request) -> str:
_credentials_exception: HTTPException = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
# Extract token from header or query param
token: str | None = None
auth_header = request.headers.get("Authorization")
if auth_header and auth_header.startswith("Bearer "):
token = auth_header.split(" ")[1]
elif request.query_params.get("token"):
token = request.query_params.get("token")
if not token:
raise _credentials_exception
try:
_payload: dict[str, Any] = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
_user_uuid: Optional[str] = _payload.get("uuid")
if _user_uuid is None:
raise _credentials_exception
return _user_uuid
except jwt.PyJWTError:
raise _credentials_exception
class Token(BaseModel):
access_token: str
token_type: str
must_change_password: bool = False
class LoginRequest(BaseModel):
username: str
password: str
class ChangePasswordRequest(BaseModel):
old_password: str
new_password: str
class LogsResponse(BaseModel):
total: int
limit: int
offset: int
data: list[dict[str, Any]]
@app.post("/api/v1/auth/login", response_model=Token)
async def login(request: LoginRequest) -> dict[str, Any]:
_user: Optional[dict[str, Any]] = await repo.get_user_by_username(request.username)
if not _user or not verify_password(request.password, _user["password_hash"]):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
_access_token_expires: timedelta = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
# Token uses uuid instead of sub
_access_token: str = create_access_token(
data={"uuid": _user["uuid"]}, expires_delta=_access_token_expires
)
return {
"access_token": _access_token,
"token_type": "bearer",
"must_change_password": bool(_user.get("must_change_password", False))
}
@app.post("/api/v1/auth/change-password")
async def change_password(request: ChangePasswordRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
_user: Optional[dict[str, Any]] = await repo.get_user_by_uuid(current_user)
if not _user or not verify_password(request.old_password, _user["password_hash"]):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect old password",
)
_new_hash: str = get_password_hash(request.new_password)
await repo.update_user_password(current_user, _new_hash, must_change_password=False)
return {"message": "Password updated successfully"}
@app.get("/api/v1/logs", response_model=LogsResponse)
async def get_logs(
limit: int = Query(50, ge=1, le=1000),
offset: int = Query(0, ge=0),
search: Optional[str] = None,
current_user: str = Depends(get_current_user)
) -> dict[str, Any]:
_logs: list[dict[str, Any]] = await repo.get_logs(limit=limit, offset=offset, search=search)
_total: int = await repo.get_total_logs(search=search)
return {
"total": _total,
"limit": limit,
"offset": offset,
"data": _logs
}
class StatsResponse(BaseModel):
total_logs: int
unique_attackers: int
active_deckies: int
deployed_deckies: int
@app.get("/api/v1/stats", response_model=StatsResponse)
async def get_stats(current_user: str = Depends(get_current_user)) -> dict[str, Any]:
return await repo.get_stats_summary()
@app.get("/api/v1/deckies")
async def get_deckies(current_user: str = Depends(get_current_user)) -> list[dict[str, Any]]:
return await repo.get_deckies()
class MutateIntervalRequest(BaseModel):
mutate_interval: int | None
@app.post("/api/v1/deckies/{decky_name}/mutate")
async def api_mutate_decky(decky_name: str, current_user: str = Depends(get_current_user)) -> dict[str, str]:
from decnet.mutator import mutate_decky
success = mutate_decky(decky_name)
if success:
return {"message": f"Successfully mutated {decky_name}"}
raise HTTPException(status_code=404, detail=f"Decky {decky_name} not found or failed to mutate")
@app.put("/api/v1/deckies/{decky_name}/mutate-interval")
async def api_update_mutate_interval(decky_name: str, req: MutateIntervalRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
from decnet.config import load_state, save_state
state = load_state()
if not state:
raise HTTPException(status_code=500, detail="No active deployment")
config, compose_path = state
decky = next((d for d in config.deckies if d.name == decky_name), None)
if not decky:
raise HTTPException(status_code=404, detail="Decky not found")
decky.mutate_interval = req.mutate_interval
save_state(config, compose_path)
return {"message": "Mutation interval updated"}
@app.get("/api/v1/stream")
async def stream_events(
request: Request,
last_event_id: int = Query(0, alias="lastEventId"),
search: Optional[str] = None,
current_user: str = Depends(get_current_user)
) -> StreamingResponse:
import json
import asyncio
async def event_generator() -> AsyncGenerator[str, None]:
# Start tracking from the provided ID, or current max if 0
last_id = last_event_id
if last_id == 0:
last_id = await repo.get_max_log_id()
stats_interval_sec = 10
loops_since_stats = 0
while True:
if await request.is_disconnected():
break
# Poll for new logs
new_logs = await repo.get_logs_after_id(last_id, limit=50, search=search)
if new_logs:
# Update last_id to the max id in the fetched batch
last_id = max(log["id"] for log in new_logs)
payload = json.dumps({"type": "logs", "data": new_logs})
yield f"event: message\ndata: {payload}\n\n"
# If we have new logs, stats probably changed, so force a stats update
loops_since_stats = stats_interval_sec
# Periodically poll for stats
if loops_since_stats >= stats_interval_sec:
stats = await repo.get_stats_summary()
payload = json.dumps({"type": "stats", "data": stats})
yield f"event: message\ndata: {payload}\n\n"
loops_since_stats = 0
loops_since_stats += 1
await asyncio.sleep(1)
return StreamingResponse(event_generator(), media_type="text/event-stream")
class DeployIniRequest(BaseModel):
ini_content: str = Field(..., min_length=5, max_length=512 * 1024)
@app.post("/api/v1/deckies/deploy")
async def api_deploy_deckies(req: DeployIniRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
from decnet.ini_loader import load_ini_from_string
from decnet.cli import _build_deckies_from_ini
from decnet.config import load_state, DecnetConfig, DEFAULT_MUTATE_INTERVAL
from decnet.network import detect_interface, detect_subnet, get_host_ip
from decnet.deployer import deploy as _deploy
import logging
import os
try:
ini = load_ini_from_string(req.ini_content)
except Exception as e:
raise HTTPException(status_code=400, detail=f"Failed to parse INI: {e}")
state = load_state()
ingest_log_file = os.environ.get("DECNET_INGEST_LOG_FILE")
if state:
config, _ = state
subnet_cidr = ini.subnet or config.subnet
gateway = ini.gateway or config.gateway
host_ip = get_host_ip(config.interface)
randomize_services = False
# Always sync config log_file with current API ingestion target
if ingest_log_file:
config.log_file = ingest_log_file
else:
# If no state exists, we need to infer network details
iface = ini.interface or detect_interface()
subnet_cidr, gateway = ini.subnet, ini.gateway
if not subnet_cidr or not gateway:
detected_subnet, detected_gateway = detect_subnet(iface)
subnet_cidr = subnet_cidr or detected_subnet
gateway = gateway or detected_gateway
host_ip = get_host_ip(iface)
randomize_services = False
config = DecnetConfig(
mode="unihost",
interface=iface,
subnet=subnet_cidr,
gateway=gateway,
deckies=[],
log_target=ini.log_target,
log_file=ingest_log_file,
ipvlan=False,
mutate_interval=ini.mutate_interval or DEFAULT_MUTATE_INTERVAL
)
try:
new_decky_configs = _build_deckies_from_ini(
ini, subnet_cidr, gateway, host_ip, randomize_services, cli_mutate_interval=None
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
# Merge deckies
existing_deckies_map = {d.name: d for d in config.deckies}
for new_decky in new_decky_configs:
existing_deckies_map[new_decky.name] = new_decky
config.deckies = list(existing_deckies_map.values())
# We call deploy(config) which regenerates docker-compose and runs `up -d --remove-orphans`.
try:
_deploy(config)
except Exception as e:
logging.getLogger("decnet.web.api").error(f"Deployment failed: {e}")
raise HTTPException(status_code=500, detail=f"Deployment failed: {e}")
return {"message": "Deckies deployed successfully"}

View File

@@ -12,7 +12,7 @@ ACCESS_TOKEN_EXPIRE_MINUTES: int = 1440
def verify_password(plain_password: str, hashed_password: str) -> bool: def verify_password(plain_password: str, hashed_password: str) -> bool:
return bcrypt.checkpw( return bcrypt.checkpw(
plain_password.encode("utf-8"), plain_password.encode("utf-8")[:72],
hashed_password.encode("utf-8") hashed_password.encode("utf-8")
) )
@@ -20,7 +20,7 @@ def verify_password(plain_password: str, hashed_password: str) -> bool:
def get_password_hash(password: str) -> str: def get_password_hash(password: str) -> str:
# Use a cost factor of 12 (default for passlib/bcrypt) # Use a cost factor of 12 (default for passlib/bcrypt)
_salt: bytes = bcrypt.gensalt(rounds=12) _salt: bytes = bcrypt.gensalt(rounds=12)
_hashed: bytes = bcrypt.hashpw(password.encode("utf-8"), _salt) _hashed: bytes = bcrypt.hashpw(password.encode("utf-8")[:72], _salt)
return _hashed.decode("utf-8") return _hashed.decode("utf-8")

75
decnet/web/db/models.py Normal file
View File

@@ -0,0 +1,75 @@
from datetime import datetime, timezone
from typing import Optional, Any, List
from sqlmodel import SQLModel, Field
from pydantic import BaseModel, Field as PydanticField
# --- Database Tables (SQLModel) ---
class User(SQLModel, table=True):
__tablename__ = "users"
uuid: str = Field(primary_key=True)
username: str = Field(index=True, unique=True)
password_hash: str
role: str = Field(default="viewer")
must_change_password: bool = Field(default=False)
class Log(SQLModel, table=True):
__tablename__ = "logs"
id: Optional[int] = Field(default=None, primary_key=True)
timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc), index=True)
decky: str = Field(index=True)
service: str = Field(index=True)
event_type: str = Field(index=True)
attacker_ip: str = Field(index=True)
raw_line: str
fields: str
msg: Optional[str] = None
class Bounty(SQLModel, table=True):
__tablename__ = "bounty"
id: Optional[int] = Field(default=None, primary_key=True)
timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc), index=True)
decky: str = Field(index=True)
service: str = Field(index=True)
attacker_ip: str = Field(index=True)
bounty_type: str = Field(index=True)
payload: str
# --- API Request/Response Models (Pydantic) ---
class Token(BaseModel):
access_token: str
token_type: str
must_change_password: bool = False
class LoginRequest(BaseModel):
username: str
password: str = PydanticField(..., max_length=72)
class ChangePasswordRequest(BaseModel):
old_password: str = PydanticField(..., max_length=72)
new_password: str = PydanticField(..., max_length=72)
class LogsResponse(BaseModel):
total: int
limit: int
offset: int
data: List[dict[str, Any]]
class BountyResponse(BaseModel):
total: int
limit: int
offset: int
data: List[dict[str, Any]]
class StatsResponse(BaseModel):
total_logs: int
unique_attackers: int
active_deckies: int
deployed_deckies: int
class MutateIntervalRequest(BaseModel):
mutate_interval: Optional[int] = None
class DeployIniRequest(BaseModel):
ini_content: str = PydanticField(..., min_length=5, max_length=512 * 1024)

View File

@@ -59,3 +59,24 @@ class BaseRepository(ABC):
async def update_user_password(self, uuid: str, password_hash: str, must_change_password: bool = False) -> None: async def update_user_password(self, uuid: str, password_hash: str, must_change_password: bool = False) -> None:
"""Update a user's password and change the must_change_password flag.""" """Update a user's password and change the must_change_password flag."""
pass pass
@abstractmethod
async def add_bounty(self, bounty_data: dict[str, Any]) -> None:
"""Add a new harvested artifact (bounty) to the database."""
pass
@abstractmethod
async def get_bounties(
self,
limit: int = 50,
offset: int = 0,
bounty_type: Optional[str] = None,
search: Optional[str] = None
) -> list[dict[str, Any]]:
"""Retrieve paginated bounty entries."""
pass
@abstractmethod
async def get_total_bounties(self, bounty_type: Optional[str] = None, search: Optional[str] = None) -> int:
"""Retrieve the total count of bounties, optionally filtered."""
pass

View File

@@ -0,0 +1,33 @@
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
from sqlalchemy import create_engine
from sqlmodel import SQLModel
# We need both sync and async engines for SQLite
# Sync for initialization (DDL) and async for standard queries
def get_async_engine(db_path: str):
# If it's a memory URI, don't add the extra slash that turns it into a relative file
prefix = "sqlite+aiosqlite:///"
if db_path.startswith("file:"):
prefix = "sqlite+aiosqlite:///"
return create_async_engine(f"{prefix}{db_path}", echo=False, connect_args={"uri": True})
def get_sync_engine(db_path: str):
prefix = "sqlite:///"
return create_engine(f"{prefix}{db_path}", echo=False, connect_args={"uri": True})
def init_db(db_path: str):
"""Synchronously create all tables."""
engine = get_sync_engine(db_path)
# Ensure WAL mode is set
with engine.connect() as conn:
conn.exec_driver_sql("PRAGMA journal_mode=WAL")
conn.exec_driver_sql("PRAGMA synchronous=NORMAL")
SQLModel.metadata.create_all(engine)
async def get_session(engine) -> AsyncSession:
async_session = async_sessionmaker(
engine, class_=AsyncSession, expire_on_commit=False
)
async with async_session() as session:
yield session

View File

@@ -0,0 +1,357 @@
import asyncio
import json
import uuid
from datetime import datetime
from typing import Any, Optional, List
from sqlalchemy import func, select, desc, asc, text, or_, update, literal_column
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from decnet.config import load_state, _ROOT
from decnet.env import DECNET_ADMIN_USER, DECNET_ADMIN_PASSWORD
from decnet.web.auth import get_password_hash
from decnet.web.db.repository import BaseRepository
from decnet.web.db.models import User, Log, Bounty
from decnet.web.db.sqlite.database import get_async_engine, init_db
class SQLiteRepository(BaseRepository):
"""SQLite implementation using SQLModel and SQLAlchemy Async."""
def __init__(self, db_path: str = str(_ROOT / "decnet.db")) -> None:
self.db_path = db_path
self.engine = get_async_engine(db_path)
self.session_factory = async_sessionmaker(
self.engine, class_=AsyncSession, expire_on_commit=False
)
self._initialize_sync()
def _initialize_sync(self) -> None:
"""Initialize the database schema synchronously."""
init_db(self.db_path)
from decnet.web.db.sqlite.database import get_sync_engine
engine = get_sync_engine(self.db_path)
with engine.connect() as conn:
result = conn.execute(
text("SELECT uuid FROM users WHERE username = :u"),
{"u": DECNET_ADMIN_USER},
)
if not result.fetchone():
conn.execute(
text(
"INSERT INTO users (uuid, username, password_hash, role, must_change_password) "
"VALUES (:uuid, :u, :p, :r, :m)"
),
{
"uuid": str(uuid.uuid4()),
"u": DECNET_ADMIN_USER,
"p": get_password_hash(DECNET_ADMIN_PASSWORD),
"r": "admin",
"m": 1,
},
)
conn.commit()
async def initialize(self) -> None:
"""Async warm-up / verification."""
async with self.session_factory() as session:
await session.execute(text("SELECT 1"))
async def reinitialize(self) -> None:
"""Initialize the database schema asynchronously (useful for tests)."""
from sqlmodel import SQLModel
async with self.engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == DECNET_ADMIN_USER)
)
if not result.scalar_one_or_none():
session.add(User(
uuid=str(uuid.uuid4()),
username=DECNET_ADMIN_USER,
password_hash=get_password_hash(DECNET_ADMIN_PASSWORD),
role="admin",
must_change_password=True,
))
await session.commit()
# ------------------------------------------------------------------ logs
async def add_log(self, log_data: dict[str, Any]) -> None:
data = log_data.copy()
if "fields" in data and isinstance(data["fields"], dict):
data["fields"] = json.dumps(data["fields"])
if "timestamp" in data and isinstance(data["timestamp"], str):
try:
data["timestamp"] = datetime.fromisoformat(
data["timestamp"].replace("Z", "+00:00")
)
except ValueError:
pass
async with self.session_factory() as session:
session.add(Log(**data))
await session.commit()
def _apply_filters(
self,
statement,
search: Optional[str],
start_time: Optional[str],
end_time: Optional[str],
):
import re
import shlex
if start_time:
statement = statement.where(Log.timestamp >= start_time)
if end_time:
statement = statement.where(Log.timestamp <= end_time)
if search:
try:
tokens = shlex.split(search)
except ValueError:
tokens = search.split()
core_fields = {
"decky": Log.decky,
"service": Log.service,
"event": Log.event_type,
"attacker": Log.attacker_ip,
"attacker-ip": Log.attacker_ip,
"attacker_ip": Log.attacker_ip,
}
for token in tokens:
if ":" in token:
key, val = token.split(":", 1)
if key in core_fields:
statement = statement.where(core_fields[key] == val)
else:
key_safe = re.sub(r"[^a-zA-Z0-9_]", "", key)
statement = statement.where(
text(f"json_extract(fields, '$.{key_safe}') = :val")
).params(val=val)
else:
lk = f"%{token}%"
statement = statement.where(
or_(
Log.raw_line.like(lk),
Log.decky.like(lk),
Log.service.like(lk),
Log.attacker_ip.like(lk),
)
)
return statement
async def get_logs(
self,
limit: int = 50,
offset: int = 0,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log)
.order_by(desc(Log.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode='json') for log in results.scalars().all()]
async def get_max_log_id(self) -> int:
async with self.session_factory() as session:
result = await session.execute(select(func.max(Log.id)))
val = result.scalar()
return val if val is not None else 0
async def get_logs_after_id(
self,
last_id: int,
limit: int = 50,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> List[dict]:
statement = (
select(Log).where(Log.id > last_id).order_by(asc(Log.id)).limit(limit)
)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
results = await session.execute(statement)
return [log.model_dump(mode='json') for log in results.scalars().all()]
async def get_total_logs(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
) -> int:
statement = select(func.count()).select_from(Log)
statement = self._apply_filters(statement, search, start_time, end_time)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0
async def get_log_histogram(
self,
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
interval_minutes: int = 15,
) -> List[dict]:
bucket_seconds = interval_minutes * 60
bucket_expr = literal_column(
f"datetime((strftime('%s', timestamp) / {bucket_seconds}) * {bucket_seconds}, 'unixepoch')"
).label("bucket_time")
statement = select(bucket_expr, func.count().label("count")).select_from(Log)
statement = self._apply_filters(statement, search, start_time, end_time)
statement = statement.group_by(literal_column("bucket_time")).order_by(
literal_column("bucket_time")
)
async with self.session_factory() as session:
results = await session.execute(statement)
return [{"time": r[0], "count": r[1]} for r in results.all()]
async def get_stats_summary(self) -> dict[str, Any]:
async with self.session_factory() as session:
total_logs = (
await session.execute(select(func.count()).select_from(Log))
).scalar() or 0
unique_attackers = (
await session.execute(
select(func.count(func.distinct(Log.attacker_ip)))
)
).scalar() or 0
active_deckies = (
await session.execute(
select(func.count(func.distinct(Log.decky)))
)
).scalar() or 0
_state = await asyncio.to_thread(load_state)
deployed_deckies = len(_state[0].deckies) if _state else 0
return {
"total_logs": total_logs,
"unique_attackers": unique_attackers,
"active_deckies": active_deckies,
"deployed_deckies": deployed_deckies,
}
async def get_deckies(self) -> List[dict]:
_state = await asyncio.to_thread(load_state)
return [_d.model_dump() for _d in _state[0].deckies] if _state else []
# ------------------------------------------------------------------ users
async def get_user_by_username(self, username: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.username == username)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def get_user_by_uuid(self, uuid: str) -> Optional[dict]:
async with self.session_factory() as session:
result = await session.execute(
select(User).where(User.uuid == uuid)
)
user = result.scalar_one_or_none()
return user.model_dump() if user else None
async def create_user(self, user_data: dict[str, Any]) -> None:
async with self.session_factory() as session:
session.add(User(**user_data))
await session.commit()
async def update_user_password(
self, uuid: str, password_hash: str, must_change_password: bool = False
) -> None:
async with self.session_factory() as session:
await session.execute(
update(User)
.where(User.uuid == uuid)
.values(
password_hash=password_hash,
must_change_password=must_change_password,
)
)
await session.commit()
# ---------------------------------------------------------------- bounties
async def add_bounty(self, bounty_data: dict[str, Any]) -> None:
data = bounty_data.copy()
if "payload" in data and isinstance(data["payload"], dict):
data["payload"] = json.dumps(data["payload"])
async with self.session_factory() as session:
session.add(Bounty(**data))
await session.commit()
def _apply_bounty_filters(self, statement, bounty_type: Optional[str], search: Optional[str]):
if bounty_type:
statement = statement.where(Bounty.bounty_type == bounty_type)
if search:
lk = f"%{search}%"
statement = statement.where(
or_(
Bounty.decky.like(lk),
Bounty.service.like(lk),
Bounty.attacker_ip.like(lk),
Bounty.payload.like(lk),
)
)
return statement
async def get_bounties(
self,
limit: int = 50,
offset: int = 0,
bounty_type: Optional[str] = None,
search: Optional[str] = None,
) -> List[dict]:
statement = (
select(Bounty)
.order_by(desc(Bounty.timestamp))
.offset(offset)
.limit(limit)
)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
results = await session.execute(statement)
final = []
for item in results.scalars().all():
d = item.model_dump(mode='json')
try:
d["payload"] = json.loads(d["payload"])
except (json.JSONDecodeError, TypeError):
pass
final.append(d)
return final
async def get_total_bounties(
self, bounty_type: Optional[str] = None, search: Optional[str] = None
) -> int:
statement = select(func.count()).select_from(Bounty)
statement = self._apply_bounty_filters(statement, bounty_type, search)
async with self.session_factory() as session:
result = await session.execute(statement)
return result.scalar() or 0

View File

@@ -0,0 +1,73 @@
from typing import Any, Optional
from pathlib import Path
import jwt
from fastapi import HTTPException, status, Request
from fastapi.security import OAuth2PasswordBearer
from decnet.web.auth import ALGORITHM, SECRET_KEY
from decnet.web.db.sqlite.repository import SQLiteRepository
# Root directory for database
_ROOT_DIR = Path(__file__).parent.parent.parent.absolute()
DB_PATH = _ROOT_DIR / "decnet.db"
# Shared repository instance
repo = SQLiteRepository(db_path=str(DB_PATH))
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/v1/auth/login")
async def get_stream_user(request: Request, token: Optional[str] = None) -> str:
"""Auth dependency for SSE endpoints — accepts Bearer header OR ?token= query param.
EventSource does not support custom headers, so the query-string fallback is intentional here only.
"""
_credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
auth_header = request.headers.get("Authorization")
resolved: str | None = (
auth_header.split(" ", 1)[1]
if auth_header and auth_header.startswith("Bearer ")
else token
)
if not resolved:
raise _credentials_exception
try:
_payload: dict[str, Any] = jwt.decode(resolved, SECRET_KEY, algorithms=[ALGORITHM])
_user_uuid: Optional[str] = _payload.get("uuid")
if _user_uuid is None:
raise _credentials_exception
return _user_uuid
except jwt.PyJWTError:
raise _credentials_exception
async def get_current_user(request: Request) -> str:
_credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
auth_header = request.headers.get("Authorization")
token: str | None = (
auth_header.split(" ", 1)[1]
if auth_header and auth_header.startswith("Bearer ")
else None
)
if not token:
raise _credentials_exception
try:
_payload: dict[str, Any] = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
_user_uuid: Optional[str] = _payload.get("uuid")
if _user_uuid is None:
raise _credentials_exception
return _user_uuid
except jwt.PyJWTError:
raise _credentials_exception

View File

@@ -5,7 +5,7 @@ import json
from typing import Any from typing import Any
from pathlib import Path from pathlib import Path
from decnet.web.repository import BaseRepository from decnet.web.db.repository import BaseRepository
logger: logging.Logger = logging.getLogger("decnet.web.ingester") logger: logging.Logger = logging.getLogger("decnet.web.ingester")
@@ -54,6 +54,7 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
try: try:
_log_data: dict[str, Any] = json.loads(_line.strip()) _log_data: dict[str, Any] = json.loads(_line.strip())
await repo.add_log(_log_data) await repo.add_log(_log_data)
await _extract_bounty(repo, _log_data)
except json.JSONDecodeError: except json.JSONDecodeError:
logger.error(f"Failed to decode JSON log line: {_line}") logger.error(f"Failed to decode JSON log line: {_line}")
continue continue
@@ -66,3 +67,28 @@ async def log_ingestion_worker(repo: BaseRepository) -> None:
await asyncio.sleep(5) await asyncio.sleep(5)
await asyncio.sleep(1) await asyncio.sleep(1)
async def _extract_bounty(repo: BaseRepository, log_data: dict[str, Any]) -> None:
"""Detect and extract valuable artifacts (bounties) from log entries."""
_fields = log_data.get("fields")
if not isinstance(_fields, dict):
return
# 1. Credentials (User/Pass)
_user = _fields.get("username")
_pass = _fields.get("password")
if _user and _pass:
await repo.add_bounty({
"decky": log_data.get("decky"),
"service": log_data.get("service"),
"attacker_ip": log_data.get("attacker_ip"),
"bounty_type": "credential",
"payload": {
"username": _user,
"password": _pass
}
})
# 2. Add more extractors here later (e.g. file hashes, crypto keys)

View File

@@ -0,0 +1,36 @@
from fastapi import APIRouter
from .auth.api_login import router as login_router
from .auth.api_change_pass import router as change_pass_router
from .logs.api_get_logs import router as logs_router
from .logs.api_get_histogram import router as histogram_router
from .bounty.api_get_bounties import router as bounty_router
from .stats.api_get_stats import router as stats_router
from .fleet.api_get_deckies import router as get_deckies_router
from .fleet.api_mutate_decky import router as mutate_decky_router
from .fleet.api_mutate_interval import router as mutate_interval_router
from .fleet.api_deploy_deckies import router as deploy_deckies_router
from .stream.api_stream_events import router as stream_router
api_router = APIRouter()
# Authentication
api_router.include_router(login_router)
api_router.include_router(change_pass_router)
# Logs & Analytics
api_router.include_router(logs_router)
api_router.include_router(histogram_router)
# Bounty Vault
api_router.include_router(bounty_router)
# Fleet Management
api_router.include_router(get_deckies_router)
api_router.include_router(mutate_decky_router)
api_router.include_router(mutate_interval_router)
api_router.include_router(deploy_deckies_router)
# Observability
api_router.include_router(stats_router)
api_router.include_router(stream_router)

View File

@@ -0,0 +1,27 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, HTTPException, status
from decnet.web.auth import get_password_hash, verify_password
from decnet.web.dependencies import get_current_user, repo
from decnet.web.db.models import ChangePasswordRequest
router = APIRouter()
@router.post(
"/auth/change-password",
tags=["Authentication"],
responses={401: {"description": "Invalid or expired token / wrong old password"}, 422: {"description": "Validation error"}},
)
async def change_password(request: ChangePasswordRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
_user: Optional[dict[str, Any]] = await repo.get_user_by_uuid(current_user)
if not _user or not verify_password(request.old_password, _user["password_hash"]):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect old password",
)
_new_hash: str = get_password_hash(request.new_password)
await repo.update_user_password(current_user, _new_hash, must_change_password=False)
return {"message": "Password updated successfully"}

View File

@@ -0,0 +1,41 @@
from datetime import timedelta
from typing import Any, Optional
from fastapi import APIRouter, HTTPException, status
from decnet.web.auth import (
ACCESS_TOKEN_EXPIRE_MINUTES,
create_access_token,
verify_password,
)
from decnet.web.dependencies import repo
from decnet.web.db.models import LoginRequest, Token
router = APIRouter()
@router.post(
"/auth/login",
response_model=Token,
tags=["Authentication"],
responses={401: {"description": "Incorrect username or password"}, 422: {"description": "Validation error"}},
)
async def login(request: LoginRequest) -> dict[str, Any]:
_user: Optional[dict[str, Any]] = await repo.get_user_by_username(request.username)
if not _user or not verify_password(request.password, _user["password_hash"]):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
_access_token_expires: timedelta = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
# Token uses uuid instead of sub
_access_token: str = create_access_token(
data={"uuid": _user["uuid"]}, expires_delta=_access_token_expires
)
return {
"access_token": _access_token,
"token_type": "bearer", # nosec B105
"must_change_password": bool(_user.get("must_change_password", False))
}

View File

@@ -0,0 +1,28 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, Query
from decnet.web.dependencies import get_current_user, repo
from decnet.web.db.models import BountyResponse
router = APIRouter()
@router.get("/bounty", response_model=BountyResponse, tags=["Bounty Vault"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def get_bounties(
limit: int = Query(50, ge=1, le=1000),
offset: int = Query(0, ge=0),
bounty_type: Optional[str] = None,
search: Optional[str] = None,
current_user: str = Depends(get_current_user)
) -> dict[str, Any]:
"""Retrieve collected bounties (harvested credentials, payloads, etc.)."""
_data = await repo.get_bounties(limit=limit, offset=offset, bounty_type=bounty_type, search=search)
_total = await repo.get_total_bounties(bounty_type=bounty_type, search=search)
return {
"total": _total,
"limit": limit,
"offset": offset,
"data": _data
}

View File

@@ -0,0 +1,79 @@
import logging
import os
from fastapi import APIRouter, Depends, HTTPException
from decnet.config import DEFAULT_MUTATE_INTERVAL, DecnetConfig, load_state
from decnet.engine import deploy as _deploy
from decnet.ini_loader import load_ini_from_string
from decnet.network import detect_interface, detect_subnet, get_host_ip
from decnet.web.dependencies import get_current_user
from decnet.web.db.models import DeployIniRequest
router = APIRouter()
@router.post("/deckies/deploy", tags=["Fleet Management"])
async def api_deploy_deckies(req: DeployIniRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
from decnet.fleet import build_deckies_from_ini
try:
ini = load_ini_from_string(req.ini_content)
except Exception as e:
raise HTTPException(status_code=400, detail=f"Failed to parse INI: {e}")
state = load_state()
ingest_log_file = os.environ.get("DECNET_INGEST_LOG_FILE")
if state:
config, _ = state
subnet_cidr = ini.subnet or config.subnet
gateway = ini.gateway or config.gateway
host_ip = get_host_ip(config.interface)
randomize_services = False
# Always sync config log_file with current API ingestion target
if ingest_log_file:
config.log_file = ingest_log_file
else:
# If no state exists, we need to infer network details
iface = ini.interface or detect_interface()
subnet_cidr, gateway = ini.subnet, ini.gateway
if not subnet_cidr or not gateway:
detected_subnet, detected_gateway = detect_subnet(iface)
subnet_cidr = subnet_cidr or detected_subnet
gateway = gateway or detected_gateway
host_ip = get_host_ip(iface)
randomize_services = False
config = DecnetConfig(
mode="unihost",
interface=iface,
subnet=subnet_cidr,
gateway=gateway,
deckies=[],
log_file=ingest_log_file,
ipvlan=False,
mutate_interval=ini.mutate_interval or DEFAULT_MUTATE_INTERVAL
)
try:
new_decky_configs = build_deckies_from_ini(
ini, subnet_cidr, gateway, host_ip, randomize_services, cli_mutate_interval=None
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
# Merge deckies
existing_deckies_map = {d.name: d for d in config.deckies}
for new_decky in new_decky_configs:
existing_deckies_map[new_decky.name] = new_decky
config.deckies = list(existing_deckies_map.values())
# We call deploy(config) which regenerates docker-compose and runs `up -d --remove-orphans`.
try:
_deploy(config)
except Exception as e:
logging.getLogger("decnet.web.api").exception("Deployment failed: %s", e)
raise HTTPException(status_code=500, detail="Deployment failed. Check server logs for details.")
return {"message": "Deckies deployed successfully"}

View File

@@ -0,0 +1,13 @@
from typing import Any
from fastapi import APIRouter, Depends
from decnet.web.dependencies import get_current_user, repo
router = APIRouter()
@router.get("/deckies", tags=["Fleet Management"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def get_deckies(current_user: str = Depends(get_current_user)) -> list[dict[str, Any]]:
return await repo.get_deckies()

View File

@@ -0,0 +1,17 @@
from fastapi import APIRouter, Depends, HTTPException, Path
from decnet.mutator import mutate_decky
from decnet.web.dependencies import get_current_user
router = APIRouter()
@router.post("/deckies/{decky_name}/mutate", tags=["Fleet Management"])
async def api_mutate_decky(
decky_name: str = Path(..., pattern=r"^[a-z0-9\-]{1,64}$"),
current_user: str = Depends(get_current_user),
) -> dict[str, str]:
success = mutate_decky(decky_name)
if success:
return {"message": f"Successfully mutated {decky_name}"}
raise HTTPException(status_code=404, detail=f"Decky {decky_name} not found or failed to mutate")

View File

@@ -0,0 +1,22 @@
from fastapi import APIRouter, Depends, HTTPException
from decnet.config import load_state, save_state
from decnet.web.dependencies import get_current_user
from decnet.web.db.models import MutateIntervalRequest
router = APIRouter()
@router.put("/deckies/{decky_name}/mutate-interval", tags=["Fleet Management"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def api_update_mutate_interval(decky_name: str, req: MutateIntervalRequest, current_user: str = Depends(get_current_user)) -> dict[str, str]:
state = load_state()
if not state:
raise HTTPException(status_code=500, detail="No active deployment")
config, compose_path = state
decky = next((d for d in config.deckies if d.name == decky_name), None)
if not decky:
raise HTTPException(status_code=404, detail="Decky not found")
decky.mutate_interval = req.mutate_interval
save_state(config, compose_path)
return {"message": "Mutation interval updated"}

View File

@@ -0,0 +1,19 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, Query
from decnet.web.dependencies import get_current_user, repo
router = APIRouter()
@router.get("/logs/histogram", tags=["Logs"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def get_logs_histogram(
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
interval_minutes: int = Query(15, ge=1),
current_user: str = Depends(get_current_user)
) -> list[dict[str, Any]]:
return await repo.get_log_histogram(search=search, start_time=start_time, end_time=end_time, interval_minutes=interval_minutes)

View File

@@ -0,0 +1,29 @@
from typing import Any, Optional
from fastapi import APIRouter, Depends, Query
from decnet.web.dependencies import get_current_user, repo
from decnet.web.db.models import LogsResponse
router = APIRouter()
_DATETIME_RE = r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}$"
@router.get("/logs", response_model=LogsResponse, tags=["Logs"])
async def get_logs(
limit: int = Query(50, ge=1, le=1000),
offset: int = Query(0, ge=0),
search: Optional[str] = Query(None, max_length=512),
start_time: Optional[str] = Query(None, pattern=_DATETIME_RE),
end_time: Optional[str] = Query(None, pattern=_DATETIME_RE),
current_user: str = Depends(get_current_user)
) -> dict[str, Any]:
_logs: list[dict[str, Any]] = await repo.get_logs(limit=limit, offset=offset, search=search, start_time=start_time, end_time=end_time)
_total: int = await repo.get_total_logs(search=search, start_time=start_time, end_time=end_time)
return {
"total": _total,
"limit": limit,
"offset": offset,
"data": _logs
}

View File

@@ -0,0 +1,14 @@
from typing import Any
from fastapi import APIRouter, Depends
from decnet.web.dependencies import get_current_user, repo
from decnet.web.db.models import StatsResponse
router = APIRouter()
@router.get("/stats", response_model=StatsResponse, tags=["Observability"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def get_stats(current_user: str = Depends(get_current_user)) -> dict[str, Any]:
return await repo.get_stats_summary()

View File

@@ -0,0 +1,75 @@
import json
import asyncio
import logging
from typing import AsyncGenerator, Optional
from fastapi import APIRouter, Depends, Query, Request
from fastapi.responses import StreamingResponse
from decnet.web.dependencies import get_stream_user, repo
log = logging.getLogger(__name__)
router = APIRouter()
@router.get("/stream", tags=["Observability"],
responses={401: {"description": "Not authenticated"}, 422: {"description": "Validation error"}},)
async def stream_events(
request: Request,
last_event_id: int = Query(0, alias="lastEventId"),
search: Optional[str] = None,
start_time: Optional[str] = None,
end_time: Optional[str] = None,
current_user: str = Depends(get_stream_user)
) -> StreamingResponse:
async def event_generator() -> AsyncGenerator[str, None]:
last_id = last_event_id
stats_interval_sec = 10
loops_since_stats = 0
try:
if last_id == 0:
last_id = await repo.get_max_log_id()
# Emit initial snapshot immediately so the client never needs to poll /stats
stats = await repo.get_stats_summary()
yield f"event: message\ndata: {json.dumps({'type': 'stats', 'data': stats})}\n\n"
histogram = await repo.get_log_histogram(
search=search, start_time=start_time,
end_time=end_time, interval_minutes=15,
)
yield f"event: message\ndata: {json.dumps({'type': 'histogram', 'data': histogram})}\n\n"
while True:
if await request.is_disconnected():
break
new_logs = await repo.get_logs_after_id(
last_id, limit=50, search=search,
start_time=start_time, end_time=end_time,
)
if new_logs:
last_id = max(entry["id"] for entry in new_logs)
yield f"event: message\ndata: {json.dumps({'type': 'logs', 'data': new_logs})}\n\n"
loops_since_stats = stats_interval_sec
if loops_since_stats >= stats_interval_sec:
stats = await repo.get_stats_summary()
yield f"event: message\ndata: {json.dumps({'type': 'stats', 'data': stats})}\n\n"
histogram = await repo.get_log_histogram(
search=search, start_time=start_time,
end_time=end_time, interval_minutes=15,
)
yield f"event: message\ndata: {json.dumps({'type': 'histogram', 'data': histogram})}\n\n"
loops_since_stats = 0
loops_since_stats += 1
await asyncio.sleep(1)
except asyncio.CancelledError:
pass
except Exception:
log.exception("SSE stream error for user %s", last_event_id)
yield f"event: error\ndata: {json.dumps({'type': 'error', 'message': 'Stream interrupted'})}\n\n"
return StreamingResponse(event_generator(), media_type="text/event-stream")

View File

@@ -1,222 +0,0 @@
import aiosqlite
from typing import Any, Optional
from decnet.web.repository import BaseRepository
from decnet.config import load_state, _ROOT
class SQLiteRepository(BaseRepository):
"""SQLite implementation of the DECNET web repository."""
def __init__(self, db_path: str = str(_ROOT / "decnet.db")) -> None:
self.db_path: str = db_path
async def initialize(self) -> None:
async with aiosqlite.connect(self.db_path) as _db:
# Logs table
await _db.execute("""
CREATE TABLE IF NOT EXISTS logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
decky TEXT,
service TEXT,
event_type TEXT,
attacker_ip TEXT,
raw_line TEXT,
fields TEXT,
msg TEXT
)
""")
try:
await _db.execute("ALTER TABLE logs ADD COLUMN fields TEXT")
except aiosqlite.OperationalError:
pass
try:
await _db.execute("ALTER TABLE logs ADD COLUMN msg TEXT")
except aiosqlite.OperationalError:
pass
# Users table (internal RBAC)
await _db.execute("""
CREATE TABLE IF NOT EXISTS users (
uuid TEXT PRIMARY KEY,
username TEXT UNIQUE,
password_hash TEXT,
role TEXT DEFAULT 'viewer',
must_change_password BOOLEAN DEFAULT 0
)
""")
try:
await _db.execute("ALTER TABLE users ADD COLUMN must_change_password BOOLEAN DEFAULT 0")
except aiosqlite.OperationalError:
pass # Column already exists
await _db.commit()
async def add_log(self, log_data: dict[str, Any]) -> None:
async with aiosqlite.connect(self.db_path) as _db:
_timestamp: Any = log_data.get("timestamp")
if _timestamp:
await _db.execute(
"INSERT INTO logs (timestamp, decky, service, event_type, attacker_ip, raw_line, fields, msg) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
(
_timestamp,
log_data.get("decky"),
log_data.get("service"),
log_data.get("event_type"),
log_data.get("attacker_ip"),
log_data.get("raw_line"),
log_data.get("fields"),
log_data.get("msg")
)
)
else:
await _db.execute(
"INSERT INTO logs (decky, service, event_type, attacker_ip, raw_line, fields, msg) VALUES (?, ?, ?, ?, ?, ?, ?)",
(
log_data.get("decky"),
log_data.get("service"),
log_data.get("event_type"),
log_data.get("attacker_ip"),
log_data.get("raw_line"),
log_data.get("fields"),
log_data.get("msg")
)
)
await _db.commit()
async def get_logs(
self,
limit: int = 50,
offset: int = 0,
search: Optional[str] = None
) -> list[dict[str, Any]]:
_query: str = "SELECT * FROM logs"
_params: list[Any] = []
if search:
_query += " WHERE raw_line LIKE ? OR decky LIKE ? OR service LIKE ? OR attacker_ip LIKE ?"
_like_val: str = f"%{search}%"
_params.extend([_like_val, _like_val, _like_val, _like_val])
_query += " ORDER BY timestamp DESC LIMIT ? OFFSET ?"
_params.extend([limit, offset])
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute(_query, _params) as _cursor:
_rows: list[aiosqlite.Row] = await _cursor.fetchall()
return [dict(_row) for _row in _rows]
async def get_max_log_id(self) -> int:
_query: str = "SELECT MAX(id) as max_id FROM logs"
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute(_query) as _cursor:
_row: aiosqlite.Row | None = await _cursor.fetchone()
return _row["max_id"] if _row and _row["max_id"] is not None else 0
async def get_logs_after_id(self, last_id: int, limit: int = 50, search: Optional[str] = None) -> list[dict[str, Any]]:
_query: str = "SELECT * FROM logs WHERE id > ?"
_params: list[Any] = [last_id]
if search:
_query += " AND (raw_line LIKE ? OR decky LIKE ? OR service LIKE ? OR attacker_ip LIKE ?)"
_like_val: str = f"%{search}%"
_params.extend([_like_val, _like_val, _like_val, _like_val])
_query += " ORDER BY id ASC LIMIT ?"
_params.append(limit)
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute(_query, _params) as _cursor:
_rows: list[aiosqlite.Row] = await _cursor.fetchall()
return [dict(_row) for _row in _rows]
async def get_total_logs(self, search: Optional[str] = None) -> int:
_query: str = "SELECT COUNT(*) as total FROM logs"
_params: list[Any] = []
if search:
_query += " WHERE raw_line LIKE ? OR decky LIKE ? OR service LIKE ? OR attacker_ip LIKE ?"
_like_val: str = f"%{search}%"
_params.extend([_like_val, _like_val, _like_val, _like_val])
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute(_query, _params) as _cursor:
_row: Optional[aiosqlite.Row] = await _cursor.fetchone()
return _row["total"] if _row else 0
async def get_stats_summary(self) -> dict[str, Any]:
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute("SELECT COUNT(*) as total_logs FROM logs") as _cursor:
_row: Optional[aiosqlite.Row] = await _cursor.fetchone()
_total_logs: int = _row["total_logs"] if _row else 0
async with _db.execute("SELECT COUNT(DISTINCT attacker_ip) as unique_attackers FROM logs") as _cursor:
_row = await _cursor.fetchone()
_unique_attackers: int = _row["unique_attackers"] if _row else 0
# Active deckies are those that HAVE interaction logs
async with _db.execute("SELECT COUNT(DISTINCT decky) as active_deckies FROM logs") as _cursor:
_row = await _cursor.fetchone()
_active_deckies: int = _row["active_deckies"] if _row else 0
# Deployed deckies are all those in the state file
_state = load_state()
_deployed_deckies: int = 0
if _state:
_deployed_deckies = len(_state[0].deckies)
return {
"total_logs": _total_logs,
"unique_attackers": _unique_attackers,
"active_deckies": _active_deckies,
"deployed_deckies": _deployed_deckies
}
async def get_deckies(self) -> list[dict[str, Any]]:
_state = load_state()
if not _state:
return []
# We can also enrich this with interaction counts/last seen from DB
_deckies: list[dict[str, Any]] = []
for _d in _state[0].deckies:
_deckies.append(_d.model_dump())
return _deckies
async def get_user_by_username(self, username: str) -> Optional[dict[str, Any]]:
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute("SELECT * FROM users WHERE username = ?", (username,)) as _cursor:
_row: Optional[aiosqlite.Row] = await _cursor.fetchone()
return dict(_row) if _row else None
async def get_user_by_uuid(self, uuid: str) -> Optional[dict[str, Any]]:
async with aiosqlite.connect(self.db_path) as _db:
_db.row_factory = aiosqlite.Row
async with _db.execute("SELECT * FROM users WHERE uuid = ?", (uuid,)) as _cursor:
_row: Optional[aiosqlite.Row] = await _cursor.fetchone()
return dict(_row) if _row else None
async def create_user(self, user_data: dict[str, Any]) -> None:
async with aiosqlite.connect(self.db_path) as _db:
await _db.execute(
"INSERT INTO users (uuid, username, password_hash, role, must_change_password) VALUES (?, ?, ?, ?, ?)",
(
user_data["uuid"],
user_data["username"],
user_data["password_hash"],
user_data["role"],
user_data.get("must_change_password", False)
)
)
await _db.commit()
async def update_user_password(self, uuid: str, password_hash: str, must_change_password: bool = False) -> None:
async with aiosqlite.connect(self.db_path) as _db:
await _db.execute(
"UPDATE users SET password_hash = ?, must_change_password = ? WHERE uuid = ?",
(password_hash, must_change_password, uuid)
)
await _db.commit()

View File

@@ -12,7 +12,8 @@
"lucide-react": "^1.7.0", "lucide-react": "^1.7.0",
"react": "^19.2.4", "react": "^19.2.4",
"react-dom": "^19.2.4", "react-dom": "^19.2.4",
"react-router-dom": "^7.14.0" "react-router-dom": "^7.14.0",
"recharts": "^3.8.1"
}, },
"devDependencies": { "devDependencies": {
"@eslint/js": "^9.39.4", "@eslint/js": "^9.39.4",
@@ -591,6 +592,42 @@
"url": "https://github.com/sponsors/Boshen" "url": "https://github.com/sponsors/Boshen"
} }
}, },
"node_modules/@reduxjs/toolkit": {
"version": "2.11.2",
"resolved": "https://registry.npmjs.org/@reduxjs/toolkit/-/toolkit-2.11.2.tgz",
"integrity": "sha512-Kd6kAHTA6/nUpp8mySPqj3en3dm0tdMIgbttnQ1xFMVpufoj+ADi8pXLBsd4xzTRHQa7t/Jv8W5UnCuW4kuWMQ==",
"license": "MIT",
"dependencies": {
"@standard-schema/spec": "^1.0.0",
"@standard-schema/utils": "^0.3.0",
"immer": "^11.0.0",
"redux": "^5.0.1",
"redux-thunk": "^3.1.0",
"reselect": "^5.1.0"
},
"peerDependencies": {
"react": "^16.9.0 || ^17.0.0 || ^18 || ^19",
"react-redux": "^7.2.1 || ^8.1.3 || ^9.0.0"
},
"peerDependenciesMeta": {
"react": {
"optional": true
},
"react-redux": {
"optional": true
}
}
},
"node_modules/@reduxjs/toolkit/node_modules/immer": {
"version": "11.1.4",
"resolved": "https://registry.npmjs.org/immer/-/immer-11.1.4.tgz",
"integrity": "sha512-XREFCPo6ksxVzP4E0ekD5aMdf8WMwmdNaz6vuvxgI40UaEiu6q3p8X52aU6GdyvLY3XXX/8R7JOTXStz/nBbRw==",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/immer"
}
},
"node_modules/@rolldown/binding-android-arm64": { "node_modules/@rolldown/binding-android-arm64": {
"version": "1.0.0-rc.13", "version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-android-arm64/-/binding-android-arm64-1.0.0-rc.13.tgz", "resolved": "https://registry.npmjs.org/@rolldown/binding-android-arm64/-/binding-android-arm64-1.0.0-rc.13.tgz",
@@ -855,6 +892,18 @@
"dev": true, "dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/@standard-schema/spec": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.1.0.tgz",
"integrity": "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==",
"license": "MIT"
},
"node_modules/@standard-schema/utils": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/@standard-schema/utils/-/utils-0.3.0.tgz",
"integrity": "sha512-e7Mew686owMaPJVNNLs55PUvgz371nKgwsc4vxE49zsODpJEnxgxRo2y/OKrqueavXgZNMDVj3DdHFlaSAeU8g==",
"license": "MIT"
},
"node_modules/@tybys/wasm-util": { "node_modules/@tybys/wasm-util": {
"version": "0.10.1", "version": "0.10.1",
"resolved": "https://registry.npmjs.org/@tybys/wasm-util/-/wasm-util-0.10.1.tgz", "resolved": "https://registry.npmjs.org/@tybys/wasm-util/-/wasm-util-0.10.1.tgz",
@@ -866,6 +915,69 @@
"tslib": "^2.4.0" "tslib": "^2.4.0"
} }
}, },
"node_modules/@types/d3-array": {
"version": "3.2.2",
"resolved": "https://registry.npmjs.org/@types/d3-array/-/d3-array-3.2.2.tgz",
"integrity": "sha512-hOLWVbm7uRza0BYXpIIW5pxfrKe0W+D5lrFiAEYR+pb6w3N2SwSMaJbXdUfSEv+dT4MfHBLtn5js0LAWaO6otw==",
"license": "MIT"
},
"node_modules/@types/d3-color": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/@types/d3-color/-/d3-color-3.1.3.tgz",
"integrity": "sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A==",
"license": "MIT"
},
"node_modules/@types/d3-ease": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/@types/d3-ease/-/d3-ease-3.0.2.tgz",
"integrity": "sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA==",
"license": "MIT"
},
"node_modules/@types/d3-interpolate": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/@types/d3-interpolate/-/d3-interpolate-3.0.4.tgz",
"integrity": "sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA==",
"license": "MIT",
"dependencies": {
"@types/d3-color": "*"
}
},
"node_modules/@types/d3-path": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/@types/d3-path/-/d3-path-3.1.1.tgz",
"integrity": "sha512-VMZBYyQvbGmWyWVea0EHs/BwLgxc+MKi1zLDCONksozI4YJMcTt8ZEuIR4Sb1MMTE8MMW49v0IwI5+b7RmfWlg==",
"license": "MIT"
},
"node_modules/@types/d3-scale": {
"version": "4.0.9",
"resolved": "https://registry.npmjs.org/@types/d3-scale/-/d3-scale-4.0.9.tgz",
"integrity": "sha512-dLmtwB8zkAeO/juAMfnV+sItKjlsw2lKdZVVy6LRr0cBmegxSABiLEpGVmSJJ8O08i4+sGR6qQtb6WtuwJdvVw==",
"license": "MIT",
"dependencies": {
"@types/d3-time": "*"
}
},
"node_modules/@types/d3-shape": {
"version": "3.1.8",
"resolved": "https://registry.npmjs.org/@types/d3-shape/-/d3-shape-3.1.8.tgz",
"integrity": "sha512-lae0iWfcDeR7qt7rA88BNiqdvPS5pFVPpo5OfjElwNaT2yyekbM0C9vK+yqBqEmHr6lDkRnYNoTBYlAgJa7a4w==",
"license": "MIT",
"dependencies": {
"@types/d3-path": "*"
}
},
"node_modules/@types/d3-time": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/@types/d3-time/-/d3-time-3.0.4.tgz",
"integrity": "sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g==",
"license": "MIT"
},
"node_modules/@types/d3-timer": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/@types/d3-timer/-/d3-timer-3.0.2.tgz",
"integrity": "sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw==",
"license": "MIT"
},
"node_modules/@types/estree": { "node_modules/@types/estree": {
"version": "1.0.8", "version": "1.0.8",
"resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
@@ -894,7 +1006,7 @@
"version": "19.2.14", "version": "19.2.14",
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz", "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz",
"integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==", "integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==",
"dev": true, "devOptional": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"csstype": "^3.2.2" "csstype": "^3.2.2"
@@ -910,6 +1022,12 @@
"@types/react": "^19.2.0" "@types/react": "^19.2.0"
} }
}, },
"node_modules/@types/use-sync-external-store": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/@types/use-sync-external-store/-/use-sync-external-store-0.0.6.tgz",
"integrity": "sha512-zFDAD+tlpf2r4asuHEj0XH6pY6i0g5NeAHPn+15wk3BV6JA69eERFXC1gyGThDkVa1zCyKr5jox1+2LbV/AMLg==",
"license": "MIT"
},
"node_modules/@typescript-eslint/eslint-plugin": { "node_modules/@typescript-eslint/eslint-plugin": {
"version": "8.58.0", "version": "8.58.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.58.0.tgz", "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.58.0.tgz",
@@ -1437,6 +1555,15 @@
"url": "https://github.com/chalk/chalk?sponsor=1" "url": "https://github.com/chalk/chalk?sponsor=1"
} }
}, },
"node_modules/clsx": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz",
"integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/color-convert": { "node_modules/color-convert": {
"version": "2.0.1", "version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
@@ -1515,9 +1642,130 @@
"version": "3.2.3", "version": "3.2.3",
"resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz", "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
"integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==", "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==",
"dev": true, "devOptional": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/d3-array": {
"version": "3.2.4",
"resolved": "https://registry.npmjs.org/d3-array/-/d3-array-3.2.4.tgz",
"integrity": "sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg==",
"license": "ISC",
"dependencies": {
"internmap": "1 - 2"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-color": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-color/-/d3-color-3.1.0.tgz",
"integrity": "sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-ease": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-ease/-/d3-ease-3.0.1.tgz",
"integrity": "sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w==",
"license": "BSD-3-Clause",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-format": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/d3-format/-/d3-format-3.1.2.tgz",
"integrity": "sha512-AJDdYOdnyRDV5b6ArilzCPPwc1ejkHcoyFarqlPqT7zRYjhavcT3uSrqcMvsgh2CgoPbK3RCwyHaVyxYcP2Arg==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-interpolate": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-interpolate/-/d3-interpolate-3.0.1.tgz",
"integrity": "sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==",
"license": "ISC",
"dependencies": {
"d3-color": "1 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-path": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-path/-/d3-path-3.1.0.tgz",
"integrity": "sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-scale": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-4.0.2.tgz",
"integrity": "sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==",
"license": "ISC",
"dependencies": {
"d3-array": "2.10.0 - 3",
"d3-format": "1 - 3",
"d3-interpolate": "1.2.0 - 3",
"d3-time": "2.1.1 - 3",
"d3-time-format": "2 - 4"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-shape": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/d3-shape/-/d3-shape-3.2.0.tgz",
"integrity": "sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA==",
"license": "ISC",
"dependencies": {
"d3-path": "^3.1.0"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-time": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-time/-/d3-time-3.1.0.tgz",
"integrity": "sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q==",
"license": "ISC",
"dependencies": {
"d3-array": "2 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-time-format": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/d3-time-format/-/d3-time-format-4.1.0.tgz",
"integrity": "sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg==",
"license": "ISC",
"dependencies": {
"d3-time": "1 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-timer": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-timer/-/d3-timer-3.0.1.tgz",
"integrity": "sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/debug": { "node_modules/debug": {
"version": "4.4.3", "version": "4.4.3",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz",
@@ -1536,6 +1784,12 @@
} }
} }
}, },
"node_modules/decimal.js-light": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/decimal.js-light/-/decimal.js-light-2.5.1.tgz",
"integrity": "sha512-qIMFpTMZmny+MMIitAB6D7iVPEorVw6YQRWkvarTkT4tBeSLLiHzcwj6q0MmYSFCiVpiqPJTJEYIrpcPzVEIvg==",
"license": "MIT"
},
"node_modules/deep-is": { "node_modules/deep-is": {
"version": "0.1.4", "version": "0.1.4",
"resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz",
@@ -1628,6 +1882,16 @@
"node": ">= 0.4" "node": ">= 0.4"
} }
}, },
"node_modules/es-toolkit": {
"version": "1.45.1",
"resolved": "https://registry.npmjs.org/es-toolkit/-/es-toolkit-1.45.1.tgz",
"integrity": "sha512-/jhoOj/Fx+A+IIyDNOvO3TItGmlMKhtX8ISAHKE90c4b/k1tqaqEZ+uUqfpU8DMnW5cgNJv606zS55jGvza0Xw==",
"license": "MIT",
"workspaces": [
"docs",
"benchmarks"
]
},
"node_modules/escalade": { "node_modules/escalade": {
"version": "3.2.0", "version": "3.2.0",
"resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz",
@@ -1835,6 +2099,12 @@
"node": ">=0.10.0" "node": ">=0.10.0"
} }
}, },
"node_modules/eventemitter3": {
"version": "5.0.4",
"resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-5.0.4.tgz",
"integrity": "sha512-mlsTRyGaPBjPedk6Bvw+aqbsXDtoAyAzm5MO7JgU+yVRyMQ5O8bD4Kcci7BS85f93veegeCPkL8R4GLClnjLFw==",
"license": "MIT"
},
"node_modules/fast-deep-equal": { "node_modules/fast-deep-equal": {
"version": "3.1.3", "version": "3.1.3",
"resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
@@ -2146,6 +2416,16 @@
"node": ">= 4" "node": ">= 4"
} }
}, },
"node_modules/immer": {
"version": "10.2.0",
"resolved": "https://registry.npmjs.org/immer/-/immer-10.2.0.tgz",
"integrity": "sha512-d/+XTN3zfODyjr89gM3mPq1WNX2B8pYsu7eORitdwyA2sBubnTl3laYlBk4sXY5FUa5qTZGBDPJICVbvqzjlbw==",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/immer"
}
},
"node_modules/import-fresh": { "node_modules/import-fresh": {
"version": "3.3.1", "version": "3.3.1",
"resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.1.tgz", "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.1.tgz",
@@ -2173,6 +2453,15 @@
"node": ">=0.8.19" "node": ">=0.8.19"
} }
}, },
"node_modules/internmap": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/internmap/-/internmap-2.0.3.tgz",
"integrity": "sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/is-extglob": { "node_modules/is-extglob": {
"version": "2.1.1", "version": "2.1.1",
"resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
@@ -2862,6 +3151,36 @@
"react": "^19.2.4" "react": "^19.2.4"
} }
}, },
"node_modules/react-is": {
"version": "19.2.4",
"resolved": "https://registry.npmjs.org/react-is/-/react-is-19.2.4.tgz",
"integrity": "sha512-W+EWGn2v0ApPKgKKCy/7s7WHXkboGcsrXE+2joLyVxkbyVQfO3MUEaUQDHoSmb8TFFrSKYa9mw64WZHNHSDzYA==",
"license": "MIT",
"peer": true
},
"node_modules/react-redux": {
"version": "9.2.0",
"resolved": "https://registry.npmjs.org/react-redux/-/react-redux-9.2.0.tgz",
"integrity": "sha512-ROY9fvHhwOD9ySfrF0wmvu//bKCQ6AeZZq1nJNtbDC+kk5DuSuNX/n6YWYF/SYy7bSba4D4FSz8DJeKY/S/r+g==",
"license": "MIT",
"dependencies": {
"@types/use-sync-external-store": "^0.0.6",
"use-sync-external-store": "^1.4.0"
},
"peerDependencies": {
"@types/react": "^18.2.25 || ^19",
"react": "^18.0 || ^19",
"redux": "^5.0.0"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"redux": {
"optional": true
}
}
},
"node_modules/react-router": { "node_modules/react-router": {
"version": "7.14.0", "version": "7.14.0",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-7.14.0.tgz", "resolved": "https://registry.npmjs.org/react-router/-/react-router-7.14.0.tgz",
@@ -2900,6 +3219,57 @@
"react-dom": ">=18" "react-dom": ">=18"
} }
}, },
"node_modules/recharts": {
"version": "3.8.1",
"resolved": "https://registry.npmjs.org/recharts/-/recharts-3.8.1.tgz",
"integrity": "sha512-mwzmO1s9sFL0TduUpwndxCUNoXsBw3u3E/0+A+cLcrSfQitSG62L32N69GhqUrrT5qKcAE3pCGVINC6pqkBBQg==",
"license": "MIT",
"workspaces": [
"www"
],
"dependencies": {
"@reduxjs/toolkit": "^1.9.0 || 2.x.x",
"clsx": "^2.1.1",
"decimal.js-light": "^2.5.1",
"es-toolkit": "^1.39.3",
"eventemitter3": "^5.0.1",
"immer": "^10.1.1",
"react-redux": "8.x.x || 9.x.x",
"reselect": "5.1.1",
"tiny-invariant": "^1.3.3",
"use-sync-external-store": "^1.2.2",
"victory-vendor": "^37.0.2"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0",
"react-dom": "^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0",
"react-is": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/redux": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/redux/-/redux-5.0.1.tgz",
"integrity": "sha512-M9/ELqF6fy8FwmkpnF0S3YKOqMyoWJ4+CS5Efg2ct3oY9daQvd/Pc71FpGZsVsbl3Cpb+IIcjBDUnnyBdQbq4w==",
"license": "MIT"
},
"node_modules/redux-thunk": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/redux-thunk/-/redux-thunk-3.1.0.tgz",
"integrity": "sha512-NW2r5T6ksUKXCabzhL9z+h206HQw/NJkcLm1GPImRQ8IzfXwRGqjVhKJGauHirT0DAuyy6hjdnMZaRoAcy0Klw==",
"license": "MIT",
"peerDependencies": {
"redux": "^5.0.0"
}
},
"node_modules/reselect": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-5.1.1.tgz",
"integrity": "sha512-K/BG6eIky/SBpzfHZv/dd+9JBFiS4SWV7FIujVyJRux6e45+73RaUHXLmIR1f7WOMaQ0U1km6qwklRQxpJJY0w==",
"license": "MIT"
},
"node_modules/resolve-from": { "node_modules/resolve-from": {
"version": "4.0.0", "version": "4.0.0",
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz", "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz",
@@ -3032,6 +3402,12 @@
"node": ">=8" "node": ">=8"
} }
}, },
"node_modules/tiny-invariant": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/tiny-invariant/-/tiny-invariant-1.3.3.tgz",
"integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==",
"license": "MIT"
},
"node_modules/tinyglobby": { "node_modules/tinyglobby": {
"version": "0.2.15", "version": "0.2.15",
"resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz", "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
@@ -3169,6 +3545,37 @@
"punycode": "^2.1.0" "punycode": "^2.1.0"
} }
}, },
"node_modules/use-sync-external-store": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.6.0.tgz",
"integrity": "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==",
"license": "MIT",
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/victory-vendor": {
"version": "37.3.6",
"resolved": "https://registry.npmjs.org/victory-vendor/-/victory-vendor-37.3.6.tgz",
"integrity": "sha512-SbPDPdDBYp+5MJHhBCAyI7wKM3d5ivekigc2Dk2s7pgbZ9wIgIBYGVw4zGHBml/qTFbexrofXW6Gu4noGxrOwQ==",
"license": "MIT AND ISC",
"dependencies": {
"@types/d3-array": "^3.0.3",
"@types/d3-ease": "^3.0.0",
"@types/d3-interpolate": "^3.0.1",
"@types/d3-scale": "^4.0.2",
"@types/d3-shape": "^3.1.0",
"@types/d3-time": "^3.0.0",
"@types/d3-timer": "^3.0.0",
"d3-array": "^3.1.6",
"d3-ease": "^3.0.1",
"d3-interpolate": "^3.0.1",
"d3-scale": "^4.0.2",
"d3-shape": "^3.1.0",
"d3-time": "^3.0.0",
"d3-timer": "^3.0.1"
}
},
"node_modules/vite": { "node_modules/vite": {
"version": "8.0.7", "version": "8.0.7",
"resolved": "https://registry.npmjs.org/vite/-/vite-8.0.7.tgz", "resolved": "https://registry.npmjs.org/vite/-/vite-8.0.7.tgz",

View File

@@ -14,7 +14,8 @@
"lucide-react": "^1.7.0", "lucide-react": "^1.7.0",
"react": "^19.2.4", "react": "^19.2.4",
"react-dom": "^19.2.4", "react-dom": "^19.2.4",
"react-router-dom": "^7.14.0" "react-router-dom": "^7.14.0",
"recharts": "^3.8.1"
}, },
"devDependencies": { "devDependencies": {
"@eslint/js": "^9.39.4", "@eslint/js": "^9.39.4",

View File

@@ -7,6 +7,7 @@ import DeckyFleet from './components/DeckyFleet';
import LiveLogs from './components/LiveLogs'; import LiveLogs from './components/LiveLogs';
import Attackers from './components/Attackers'; import Attackers from './components/Attackers';
import Config from './components/Config'; import Config from './components/Config';
import Bounty from './components/Bounty';
function App() { function App() {
const [token, setToken] = useState<string | null>(localStorage.getItem('token')); const [token, setToken] = useState<string | null>(localStorage.getItem('token'));
@@ -43,6 +44,7 @@ function App() {
<Route path="/" element={<Dashboard searchQuery={searchQuery} />} /> <Route path="/" element={<Dashboard searchQuery={searchQuery} />} />
<Route path="/fleet" element={<DeckyFleet />} /> <Route path="/fleet" element={<DeckyFleet />} />
<Route path="/live-logs" element={<LiveLogs />} /> <Route path="/live-logs" element={<LiveLogs />} />
<Route path="/bounty" element={<Bounty />} />
<Route path="/attackers" element={<Attackers />} /> <Route path="/attackers" element={<Attackers />} />
<Route path="/config" element={<Config />} /> <Route path="/config" element={<Config />} />
<Route path="*" element={<Navigate to="/" replace />} /> <Route path="*" element={<Navigate to="/" replace />} />

View File

@@ -0,0 +1,191 @@
import React, { useEffect, useState } from 'react';
import { useSearchParams } from 'react-router-dom';
import { Archive, Search, ChevronLeft, ChevronRight, Filter } from 'lucide-react';
import api from '../utils/api';
import './Dashboard.css';
interface BountyEntry {
id: number;
timestamp: string;
decky: string;
service: string;
attacker_ip: string;
bounty_type: string;
payload: any;
}
const Bounty: React.FC = () => {
const [searchParams, setSearchParams] = useSearchParams();
const query = searchParams.get('q') || '';
const typeFilter = searchParams.get('type') || '';
const page = parseInt(searchParams.get('page') || '1');
const [bounties, setBounties] = useState<BountyEntry[]>([]);
const [total, setTotal] = useState(0);
const [loading, setLoading] = useState(true);
const [searchInput, setSearchInput] = useState(query);
const limit = 50;
const fetchBounties = async () => {
setLoading(true);
try {
const offset = (page - 1) * limit;
let url = `/bounty?limit=${limit}&offset=${offset}`;
if (query) url += `&search=${encodeURIComponent(query)}`;
if (typeFilter) url += `&bounty_type=${typeFilter}`;
const res = await api.get(url);
setBounties(res.data.data);
setTotal(res.data.total);
} catch (err) {
console.error('Failed to fetch bounties', err);
} finally {
setLoading(false);
}
};
useEffect(() => {
fetchBounties();
}, [query, typeFilter, page]);
const handleSearch = (e: React.FormEvent) => {
e.preventDefault();
setSearchParams({ q: searchInput, type: typeFilter, page: '1' });
};
const setPage = (p: number) => {
setSearchParams({ q: query, type: typeFilter, page: p.toString() });
};
const setType = (t: string) => {
setSearchParams({ q: query, type: t, page: '1' });
};
const totalPages = Math.ceil(total / limit);
return (
<div className="dashboard">
{/* Page Header */}
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '16px' }}>
<Archive size={32} className="violet-accent" />
<h1 style={{ fontSize: '1.5rem', letterSpacing: '4px' }}>BOUNTY VAULT</h1>
</div>
<div style={{ display: 'flex', gap: '16px', alignItems: 'center' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '8px', border: '1px solid var(--border-color)', padding: '4px 12px' }}>
<Filter size={16} className="dim" />
<select
value={typeFilter}
onChange={(e) => setType(e.target.value)}
style={{ background: 'transparent', border: 'none', color: 'inherit', fontSize: '0.8rem', outline: 'none' }}
>
<option value="">ALL TYPES</option>
<option value="credential">CREDENTIALS</option>
<option value="payload">PAYLOADS</option>
</select>
</div>
<form onSubmit={handleSearch} style={{ display: 'flex', alignItems: 'center', border: '1px solid var(--border-color)', padding: '4px 12px' }}>
<Search size={18} style={{ opacity: 0.5, marginRight: '8px' }} />
<input
type="text"
placeholder="Search bounty..."
value={searchInput}
onChange={(e) => setSearchInput(e.target.value)}
style={{ background: 'transparent', border: 'none', padding: '4px', fontSize: '0.8rem', width: '200px' }}
/>
</form>
</div>
</div>
<div className="logs-section">
<div className="section-header" style={{ justifyContent: 'space-between' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '12px' }}>
<span className="matrix-text" style={{ fontSize: '0.8rem' }}>{total} ARTIFACTS CAPTURED</span>
</div>
<div style={{ display: 'flex', alignItems: 'center', gap: '16px' }}>
<span className="dim" style={{ fontSize: '0.8rem' }}>
Page {page} of {totalPages || 1}
</span>
<div style={{ display: 'flex', gap: '8px' }}>
<button
disabled={page <= 1}
onClick={() => setPage(page - 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page <= 1 ? 0.3 : 1 }}
>
<ChevronLeft size={16} />
</button>
<button
disabled={page >= totalPages}
onClick={() => setPage(page + 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page >= totalPages ? 0.3 : 1 }}
>
<ChevronRight size={16} />
</button>
</div>
</div>
</div>
<div className="logs-table-container">
<table className="logs-table">
<thead>
<tr>
<th>TIMESTAMP</th>
<th>DECKY</th>
<th>SERVICE</th>
<th>ATTACKER</th>
<th>TYPE</th>
<th>DATA</th>
</tr>
</thead>
<tbody>
{bounties.length > 0 ? bounties.map((b) => (
<tr key={b.id}>
<td className="dim" style={{ fontSize: '0.75rem', whiteSpace: 'nowrap' }}>{new Date(b.timestamp).toLocaleString()}</td>
<td className="violet-accent">{b.decky}</td>
<td>{b.service}</td>
<td className="matrix-text">{b.attacker_ip}</td>
<td>
<span style={{
fontSize: '0.7rem',
padding: '2px 8px',
borderRadius: '4px',
border: `1px solid ${b.bounty_type === 'credential' ? 'var(--text-color)' : 'var(--accent-color)'}`,
backgroundColor: b.bounty_type === 'credential' ? 'rgba(0, 255, 65, 0.1)' : 'rgba(238, 130, 238, 0.1)',
color: b.bounty_type === 'credential' ? 'var(--text-color)' : 'var(--accent-color)'
}}>
{b.bounty_type.toUpperCase()}
</span>
</td>
<td>
<div style={{ fontSize: '0.9rem' }}>
{b.bounty_type === 'credential' ? (
<div style={{ display: 'flex', gap: '12px' }}>
<span><span className="dim" style={{ marginRight: '4px' }}>user:</span>{b.payload.username}</span>
<span><span className="dim" style={{ marginRight: '4px' }}>pass:</span>{b.payload.password}</span>
</div>
) : (
<span className="dim" style={{ fontSize: '0.8rem' }}>{JSON.stringify(b.payload)}</span>
)}
</div>
</td>
</tr>
)) : (
<tr>
<td colSpan={6} style={{ textAlign: 'center', padding: '60px', opacity: 0.5, letterSpacing: '4px' }}>
{loading ? 'RETRIEVING ARTIFACTS...' : 'THE VAULT IS EMPTY'}
</td>
</tr>
)}
</tbody>
</table>
</div>
</div>
</div>
);
};
export default Bounty;

View File

@@ -1,5 +1,4 @@
import React, { useEffect, useState } from 'react'; import React, { useEffect, useState } from 'react';
import api from '../utils/api';
import './Dashboard.css'; import './Dashboard.css';
import { Shield, Users, Activity, Clock } from 'lucide-react'; import { Shield, Users, Activity, Clock } from 'lucide-react';
@@ -31,26 +30,7 @@ const Dashboard: React.FC<DashboardProps> = ({ searchQuery }) => {
const [logs, setLogs] = useState<LogEntry[]>([]); const [logs, setLogs] = useState<LogEntry[]>([]);
const [loading, setLoading] = useState(true); const [loading, setLoading] = useState(true);
const fetchData = async () => {
try {
const [statsRes, logsRes] = await Promise.all([
api.get('/stats'),
api.get('/logs', { params: { limit: 50, search: searchQuery } })
]);
setStats(statsRes.data);
setLogs(logsRes.data.data);
} catch (err) {
console.error('Failed to fetch dashboard data', err);
} finally {
setLoading(false);
}
};
useEffect(() => { useEffect(() => {
// Initial fetch to populate UI immediately
fetchData();
// Setup SSE connection
const token = localStorage.getItem('token'); const token = localStorage.getItem('token');
const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1'; const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
let url = `${baseUrl}/stream?token=${token}`; let url = `${baseUrl}/stream?token=${token}`;
@@ -64,13 +44,10 @@ const Dashboard: React.FC<DashboardProps> = ({ searchQuery }) => {
try { try {
const payload = JSON.parse(event.data); const payload = JSON.parse(event.data);
if (payload.type === 'logs') { if (payload.type === 'logs') {
setLogs(prev => { setLogs(prev => [...payload.data, ...prev].slice(0, 100));
const newLogs = payload.data;
// Prepend new logs, keep up to 100 in UI to prevent infinite DOM growth
return [...newLogs, ...prev].slice(0, 100);
});
} else if (payload.type === 'stats') { } else if (payload.type === 'stats') {
setStats(payload.data); setStats(payload.data);
setLoading(false);
} }
} catch (err) { } catch (err) {
console.error('Failed to parse SSE payload', err); console.error('Failed to parse SSE payload', err);

View File

@@ -1,6 +1,7 @@
import React, { useState } from 'react'; import React, { useState, useEffect } from 'react';
import { NavLink } from 'react-router-dom'; import { NavLink } from 'react-router-dom';
import { Menu, X, Search, Activity, LayoutDashboard, Terminal, Settings, LogOut, Server } from 'lucide-react'; import { Menu, X, Search, Activity, LayoutDashboard, Terminal, Settings, LogOut, Server, Archive } from 'lucide-react';
import api from '../utils/api';
import './Layout.css'; import './Layout.css';
interface LayoutProps { interface LayoutProps {
@@ -12,12 +13,27 @@ interface LayoutProps {
const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => { const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => {
const [sidebarOpen, setSidebarOpen] = useState(true); const [sidebarOpen, setSidebarOpen] = useState(true);
const [search, setSearch] = useState(''); const [search, setSearch] = useState('');
const [systemActive, setSystemActive] = useState(false);
const handleSearchSubmit = (e: React.FormEvent) => { const handleSearchSubmit = (e: React.FormEvent) => {
e.preventDefault(); e.preventDefault();
onSearch(search); onSearch(search);
}; };
useEffect(() => {
const fetchStatus = async () => {
try {
const res = await api.get('/stats');
setSystemActive(res.data.deployed_deckies > 0);
} catch (err) {
console.error('Failed to fetch system status', err);
}
};
fetchStatus();
const interval = setInterval(fetchStatus, 10000);
return () => clearInterval(interval);
}, []);
return ( return (
<div className="layout-container"> <div className="layout-container">
{/* Sidebar */} {/* Sidebar */}
@@ -34,6 +50,7 @@ const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => {
<NavItem to="/" icon={<LayoutDashboard size={20} />} label="Dashboard" open={sidebarOpen} /> <NavItem to="/" icon={<LayoutDashboard size={20} />} label="Dashboard" open={sidebarOpen} />
<NavItem to="/fleet" icon={<Server size={20} />} label="Decoy Fleet" open={sidebarOpen} /> <NavItem to="/fleet" icon={<Server size={20} />} label="Decoy Fleet" open={sidebarOpen} />
<NavItem to="/live-logs" icon={<Terminal size={20} />} label="Live Logs" open={sidebarOpen} /> <NavItem to="/live-logs" icon={<Terminal size={20} />} label="Live Logs" open={sidebarOpen} />
<NavItem to="/bounty" icon={<Archive size={20} />} label="Bounty" open={sidebarOpen} />
<NavItem to="/attackers" icon={<Activity size={20} />} label="Attackers" open={sidebarOpen} /> <NavItem to="/attackers" icon={<Activity size={20} />} label="Attackers" open={sidebarOpen} />
<NavItem to="/config" icon={<Settings size={20} />} label="Config" open={sidebarOpen} /> <NavItem to="/config" icon={<Settings size={20} />} label="Config" open={sidebarOpen} />
</nav> </nav>
@@ -60,7 +77,9 @@ const Layout: React.FC<LayoutProps> = ({ children, onLogout, onSearch }) => {
/> />
</form> </form>
<div className="topbar-status"> <div className="topbar-status">
<span className="matrix-text neon-blink">SYSTEM: ACTIVE</span> <span className="matrix-text" style={{ color: systemActive ? 'var(--text-color)' : 'var(--accent-color)' }}>
SYSTEM: {systemActive ? 'ACTIVE' : 'INACTIVE'}
</span>
</div> </div>
</header> </header>

View File

@@ -1,17 +1,341 @@
import React from 'react'; import React, { useEffect, useState, useRef } from 'react';
import { Terminal } from 'lucide-react'; import { useSearchParams } from 'react-router-dom';
import {
Terminal, Search, Activity,
ChevronLeft, ChevronRight, Play, Pause
} from 'lucide-react';
import {
BarChart, Bar, XAxis, YAxis, CartesianGrid, Tooltip, ResponsiveContainer, Cell
} from 'recharts';
import api from '../utils/api';
import './Dashboard.css'; import './Dashboard.css';
interface LogEntry {
id: number;
timestamp: string;
decky: string;
service: string;
event_type: string;
attacker_ip: string;
raw_line: string;
fields: string;
msg: string;
}
interface HistogramData {
time: string;
count: number;
}
const LiveLogs: React.FC = () => { const LiveLogs: React.FC = () => {
const [searchParams, setSearchParams] = useSearchParams();
// URL-synced state
const query = searchParams.get('q') || '';
const timeRange = searchParams.get('time') || '1h';
const isLive = searchParams.get('live') !== 'false';
const page = parseInt(searchParams.get('page') || '1');
// Local state
const [logs, setLogs] = useState<LogEntry[]>([]);
const [histogram, setHistogram] = useState<HistogramData[]>([]);
const [totalLogs, setTotalLogs] = useState(0);
const [loading, setLoading] = useState(true);
const [streaming, setStreaming] = useState(isLive);
const [searchInput, setSearchInput] = useState(query);
const eventSourceRef = useRef<EventSource | null>(null);
const limit = 50;
// Sync search input if URL changes (e.g. back button)
useEffect(() => {
setSearchInput(query);
}, [query]);
const fetchData = async () => {
if (streaming) return; // Don't fetch historical if streaming
setLoading(true);
try {
const offset = (page - 1) * limit;
let url = `/logs?limit=${limit}&offset=${offset}&search=${encodeURIComponent(query)}`;
// Calculate time bounds for historical fetch
const now = new Date();
let startTime: string | null = null;
if (timeRange !== 'all') {
const minutes = timeRange === '15m' ? 15 : timeRange === '1h' ? 60 : timeRange === '24h' ? 1440 : 0;
if (minutes > 0) {
startTime = new Date(now.getTime() - minutes * 60000).toISOString().replace('T', ' ').substring(0, 19);
url += `&start_time=${startTime}`;
}
}
const res = await api.get(url);
setLogs(res.data.data);
setTotalLogs(res.data.total);
// Fetch histogram for historical view
let histUrl = `/logs/histogram?search=${encodeURIComponent(query)}`;
if (startTime) histUrl += `&start_time=${startTime}`;
const histRes = await api.get(histUrl);
setHistogram(histRes.data);
} catch (err) {
console.error('Failed to fetch historical logs', err);
} finally {
setLoading(false);
}
};
const setupSSE = () => {
if (eventSourceRef.current) {
eventSourceRef.current.close();
}
const token = localStorage.getItem('token');
const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:8000/api/v1';
let url = `${baseUrl}/stream?token=${token}&search=${encodeURIComponent(query)}`;
if (timeRange !== 'all') {
const minutes = timeRange === '15m' ? 15 : timeRange === '1h' ? 60 : timeRange === '24h' ? 1440 : 0;
if (minutes > 0) {
const startTime = new Date(Date.now() - minutes * 60000).toISOString().replace('T', ' ').substring(0, 19);
url += `&start_time=${startTime}`;
}
}
const es = new EventSource(url);
eventSourceRef.current = es;
es.onmessage = (event) => {
try {
const payload = JSON.parse(event.data);
if (payload.type === 'logs') {
setLogs(prev => [...payload.data, ...prev].slice(0, 500));
} else if (payload.type === 'histogram') {
setHistogram(payload.data);
} else if (payload.type === 'stats') {
setTotalLogs(payload.data.total_logs);
}
} catch (err) {
console.error('Failed to parse SSE payload', err);
}
};
es.onerror = () => {
console.error('SSE connection lost, reconnecting...');
};
};
useEffect(() => {
if (streaming) {
setupSSE();
setLoading(false);
} else {
if (eventSourceRef.current) {
eventSourceRef.current.close();
eventSourceRef.current = null;
}
fetchData();
}
return () => {
if (eventSourceRef.current) {
eventSourceRef.current.close();
}
};
}, [query, timeRange, streaming, page]);
const handleSearch = (e: React.FormEvent) => {
e.preventDefault();
setSearchParams({ q: searchInput, time: timeRange, live: streaming.toString(), page: '1' });
};
const handleToggleLive = () => {
const newStreaming = !streaming;
setStreaming(newStreaming);
setSearchParams({ q: query, time: timeRange, live: newStreaming.toString(), page: '1' });
};
const handleTimeChange = (newTime: string) => {
setSearchParams({ q: query, time: newTime, live: streaming.toString(), page: '1' });
};
const changePage = (newPage: number) => {
setSearchParams({ q: query, time: timeRange, live: 'false', page: newPage.toString() });
};
return ( return (
<div className="logs-section"> <div className="dashboard">
<div className="section-header"> {/* Control Bar */}
<Terminal size={20} /> <div className="logs-section" style={{ border: 'none', background: 'transparent', padding: 0 }}>
<h2>FULL LIVE LOG STREAM</h2> <form onSubmit={handleSearch} style={{ display: 'flex', gap: '16px', marginBottom: '24px' }}>
<div className="search-container" style={{ flexGrow: 1, maxWidth: 'none' }}>
<Search className="search-icon" size={18} />
<input
type="text"
placeholder="Query logs (e.g. decky:decky-01 service:http attacker:192.168.1.5 status:failed)"
value={searchInput}
onChange={(e) => setSearchInput(e.target.value)}
/>
</div>
<select
value={timeRange}
onChange={(e) => handleTimeChange(e.target.value)}
className="search-container"
style={{ width: 'auto', color: 'var(--text-color)', cursor: 'pointer' }}
>
<option value="15m">LAST 15 MIN</option>
<option value="1h">LAST 1 HOUR</option>
<option value="24h">LAST 24 HOURS</option>
<option value="all">ALL TIME</option>
</select>
<button
type="button"
onClick={handleToggleLive}
style={{
display: 'flex', alignItems: 'center', gap: '8px',
border: `1px solid ${streaming ? 'var(--text-color)' : 'var(--accent-color)'}`,
color: streaming ? 'var(--text-color)' : 'var(--accent-color)',
minWidth: '120px', justifyContent: 'center'
}}
>
{streaming ? <><Play size={14} className="neon-blink" /> LIVE</> : <><Pause size={14} /> PAUSED</>}
</button>
</form>
</div> </div>
<div style={{ padding: '40px', textAlign: 'center', opacity: 0.5 }}>
<p>STREAM ESTABLISHED. WAITING FOR INCOMING DATA...</p> {/* Histogram Chart */}
<p style={{ marginTop: '10px', fontSize: '0.8rem' }}>(Dedicated Live Logs view placeholder)</p> <div className="logs-section" style={{ height: '200px', padding: '20px', marginBottom: '24px', minWidth: 0 }}>
<div style={{ display: 'flex', justifyContent: 'space-between', marginBottom: '10px' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '8px', fontSize: '0.7rem', color: 'var(--dim-color)' }}>
<Activity size={12} /> ATTACK VOLUME OVER TIME
</div>
<div style={{ fontSize: '0.7rem', color: 'var(--text-color)' }}>
MATCHES: {totalLogs.toLocaleString()}
</div>
</div>
<ResponsiveContainer width="100%" height="100%">
<BarChart data={histogram}>
<CartesianGrid strokeDasharray="3 3" stroke="#30363d" vertical={false} />
<XAxis
dataKey="time"
hide
/>
<YAxis
stroke="#30363d"
fontSize={10}
tickFormatter={(val) => Math.floor(val).toString()}
/>
<Tooltip
contentStyle={{ backgroundColor: '#0d1117', border: '1px solid #30363d', fontSize: '0.8rem' }}
itemStyle={{ color: 'var(--text-color)' }}
labelStyle={{ color: 'var(--dim-color)', marginBottom: '4px' }}
cursor={{ fill: 'rgba(0, 255, 65, 0.05)' }}
/>
<Bar dataKey="count" fill="var(--text-color)" radius={[2, 2, 0, 0]}>
{histogram.map((entry, index) => (
<Cell key={`cell-${index}`} fillOpacity={0.6 + (entry.count / (Math.max(...histogram.map(h => h.count)) || 1)) * 0.4} />
))}
</Bar>
</BarChart>
</ResponsiveContainer>
</div>
{/* Logs Table */}
<div className="logs-section">
<div className="section-header" style={{ display: 'flex', justifyContent: 'space-between' }}>
<div style={{ display: 'flex', alignItems: 'center', gap: '8px' }}>
<Terminal size={20} />
<h2>LOG EXPLORER</h2>
</div>
{!streaming && (
<div style={{ display: 'flex', alignItems: 'center', gap: '16px' }}>
<span className="dim" style={{ fontSize: '0.8rem' }}>
Page {page} of {Math.ceil(totalLogs / limit)}
</span>
<div style={{ display: 'flex', gap: '8px' }}>
<button
disabled={page === 1}
onClick={() => changePage(page - 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page === 1 ? 0.3 : 1 }}
>
<ChevronLeft size={16} />
</button>
<button
disabled={page >= Math.ceil(totalLogs / limit)}
onClick={() => changePage(page + 1)}
style={{ padding: '4px', border: '1px solid var(--border-color)', opacity: page >= Math.ceil(totalLogs / limit) ? 0.3 : 1 }}
>
<ChevronRight size={16} />
</button>
</div>
</div>
)}
</div>
<div className="logs-table-container">
<table className="logs-table">
<thead>
<tr>
<th>TIMESTAMP</th>
<th>DECKY</th>
<th>SERVICE</th>
<th>ATTACKER</th>
<th>EVENT</th>
</tr>
</thead>
<tbody>
{logs.length > 0 ? logs.map(log => {
let parsedFields: Record<string, any> = {};
if (log.fields) {
try {
parsedFields = JSON.parse(log.fields);
} catch (e) {}
}
return (
<tr key={log.id}>
<td className="dim" style={{ fontSize: '0.75rem', whiteSpace: 'nowrap' }}>{new Date(log.timestamp).toLocaleString()}</td>
<td className="violet-accent">{log.decky}</td>
<td className="matrix-text">{log.service}</td>
<td>{log.attacker_ip}</td>
<td>
<div style={{ display: 'flex', flexDirection: 'column', gap: '8px' }}>
<div style={{ fontWeight: 'bold', color: 'var(--text-color)', fontSize: '0.9rem' }}>
{log.event_type} {log.msg && log.msg !== '-' && <span style={{ fontWeight: 'normal', opacity: 0.8 }}> {log.msg}</span>}
</div>
{Object.keys(parsedFields).length > 0 && (
<div style={{ display: 'flex', gap: '8px', flexWrap: 'wrap' }}>
{Object.entries(parsedFields).map(([k, v]) => (
<span key={k} style={{
fontSize: '0.7rem',
backgroundColor: 'rgba(0, 255, 65, 0.1)',
padding: '2px 8px',
borderRadius: '4px',
border: '1px solid rgba(0, 255, 65, 0.3)',
wordBreak: 'break-all'
}}>
<span style={{ opacity: 0.6 }}>{k}:</span> {typeof v === 'object' ? JSON.stringify(v) : v}
</span>
))}
</div>
)}
</div>
</td>
</tr>
);
}) : (
<tr>
<td colSpan={5} style={{ textAlign: 'center', padding: '40px', opacity: 0.5 }}>
{loading ? 'RETRIEVING DATA...' : 'NO LOGS MATCHING CRITERIA'}
</td>
</tr>
)}
</tbody>
</table>
</div>
</div> </div>
</div> </div>
); );

View File

@@ -1,3 +1,5 @@
@import url('https://fonts.googleapis.com/css2?family=Ubuntu+Mono:ital,wght@0,400;0,700;1,400;1,700&display=swap');
:root { :root {
--background-color: #000000; --background-color: #000000;
--text-color: #00ff41; --text-color: #00ff41;
@@ -15,13 +17,17 @@
} }
body { body {
font-family: 'Courier New', Courier, monospace; font-family: 'Ubuntu Mono', monospace;
background-color: var(--background-color); background-color: var(--background-color);
color: var(--text-color); color: var(--text-color);
line-height: 1.5; line-height: 1.5;
overflow-x: hidden; overflow-x: hidden;
} }
input, button, textarea, select {
font-family: inherit;
}
button { button {
cursor: pointer; cursor: pointer;
background: transparent; background: transparent;

333
development/BUG_FIXES.md Normal file
View File

@@ -0,0 +1,333 @@
# Bug Fixes — Non-Feature Realism Issues
> These are fingerprint leaks and broken protocol handlers that don't need new
> interaction design — just targeted fixes. All severity High or above from REALISM_AUDIT.md.
---
## 1. HTTP — Werkzeug header leak (High)
### Problem
Every response has two `Server:` headers:
```
Server: Werkzeug/3.1.3 Python/3.11.2
Server: Apache/2.4.54 (Debian)
```
nmap correctly IDs Apache from the second header, but any attacker that does
`curl -I` or runs Burp sees the Werkzeug leak immediately. Port 6443 (K8s) also
leaks Werkzeug in every response.
### Fix — WSGI middleware to strip/replace the header
In `templates/http/server.py` (Flask app), add a `@app.after_request` hook:
```python
@app.after_request
def _fix_server_header(response):
response.headers["Server"] = os.environ.get("HTTP_SERVER_HEADER", "Apache/2.4.54 (Debian)")
return response
```
Flask sets `Server: Werkzeug/...` by default. The `after_request` hook runs after
Werkzeug's own header injection, so it overwrites it.
Same fix applies to the K8s server if it's also Flask-based.
### Fix — Apache 403 page body
Current response body: `<h1>403 Forbidden</h1>`
Replace with the actual Apache 2.4 default 403 page:
```html
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.54 (Debian) Server at {hostname} Port 80</address>
</body></html>
```
Env var `HTTP_SERVER_HEADER` and `NODE_NAME` fill the address line.
### Env vars
| Var | Default |
|-----|---------|
| `HTTP_SERVER_HEADER` | `Apache/2.4.54 (Debian)` |
---
## 2. FTP — Twisted banner (High)
### Problem
```
220 Twisted 25.5.0 FTP Server
```
This is Twisted's built-in FTP server banner. Immediately identifies the framework.
### Fix
Override the banner. The Twisted FTP server class has a `factory.welcomeMessage` or
the protocol's `sendLine()` for the greeting. Simplest fix: subclass the protocol
and override `lineReceived` to intercept the `220` line before it goes out, OR
use a `_FTPFactory` subclass that sets `welcomeMessage`:
```python
from twisted.protocols.ftp import FTPFactory, FTPAnonymousShell
from twisted.internet import reactor
import os
NODE_NAME = os.environ.get("NODE_NAME", "ftpserver")
BANNER = os.environ.get("FTP_BANNER", f"220 (vsFTPd 3.0.3)")
factory = FTPFactory(portal)
factory.welcomeMessage = BANNER # overrides the Twisted default
```
If `FTPFactory.welcomeMessage` is not directly settable, patch it at class level:
```python
FTPFactory.welcomeMessage = BANNER
```
### Anonymous login + fake directory
The current server rejects everything after login. Fix:
- Use `FTPAnonymousShell` pointed at a `MemoryFilesystem` with fake entries:
```
/
├── backup.tar.gz (0 bytes, but listable)
├── db_dump.sql (0 bytes)
├── config.ini (0 bytes)
└── credentials.txt (0 bytes)
```
- `RETR` any file → return 13 lines of plausible fake content, then close.
- Log every `RETR` with filename and client IP.
### Env vars
| Var | Default |
|-----|---------|
| `FTP_BANNER` | `220 (vsFTPd 3.0.3)` |
---
## 3. MSSQL — Silent on TDS pre-login (High)
### Problem
No response to standard TDS pre-login packets. Connection is dropped silently.
nmap barely recognizes port 1433 (`ms-sql-s?`).
### Diagnosis
The nmap fingerprint shows `\x04\x01\x00\x2b...` which is a valid TDS 7.x pre-login
response fragment. So the server is sending _something_ — but it may be truncated or
malformed enough that nmap can't complete its probe.
Check `templates/mssql/server.py`: look for the raw bytes being sent in response to
`\x12\x01` (TDS pre-login type). Common bugs:
- Wrong packet length field (bytes 2-3 of TDS header)
- Missing `\xff` terminator on the pre-login option list
- Status byte 0x01 instead of 0x00 in the TDS header (signaling last packet)
### Correct TDS 7.x pre-login response structure
```
Byte 0: 0x04 (packet type: tabular result)
Byte 1: 0x01 (status: last packet)
Bytes 2-3: 0x00 0x2b (total length including header = 43)
Bytes 4-5: 0x00 0x00 (SPID)
Byte 6: 0x01 (packet ID)
Byte 7: 0x00 (window)
--- TDS pre-login payload ---
[VERSION] option: type=0x00, offset=0x001a, length=0x0006
[ENCRYPTION] option: type=0x01, offset=0x0020, length=0x0001
[INSTOPT] option: type=0x02, offset=0x0021, length=0x0001
[THREADID] option: type=0x03, offset=0x0022, length=0x0004
[MARS] option: type=0x04, offset=0x0026, length=0x0001
Terminator: 0xff
VERSION: 0x0e 0x00 0x07 0xd0 0x00 0x00 (14.0.2000 = SQL Server 2017)
ENCRYPTION: 0x02 (ENCRYPT_NOT_SUP)
INSTOPT: 0x00
THREADID: 0x00 0x00 0x00 0x00
MARS: 0x00
```
Verify the current implementation's bytes match this exactly. Fix the length field if off.
---
## 4. MongoDB — Silent on OP_MSG (High)
### Problem
No response to `OP_MSG isMaster` command. nmap shows `mongod?` (partial recognition).
### Diagnosis
MongoDB wire protocol since 3.6 uses `OP_MSG` (opcode 2013). Older clients use
`OP_QUERY` (opcode 2004) against `admin.$cmd`. Check which one `templates/mongodb/server.py`
handles, and whether the response's `responseTo` field matches the request's `requestID`.
Common bugs:
- Handling `OP_QUERY` but not `OP_MSG`
- Wrong `responseTo` in the response header (must echo the request's requestID)
- Missing `flagBits` field in OP_MSG response (must be 0x00000000)
### Correct OP_MSG `hello` response
```python
import struct, bson
def _op_msg_hello_response(request_id: int) -> bytes:
doc = {
"ismaster": True,
"maxBsonObjectSize": 16777216,
"maxMessageSizeBytes": 48000000,
"maxWriteBatchSize": 100000,
"localTime": {"$date": int(time.time() * 1000)},
"logicalSessionTimeoutMinutes": 30,
"connectionId": 1,
"minWireVersion": 0,
"maxWireVersion": 17,
"readOnly": False,
"ok": 1.0,
}
payload = b"\x00" + bson.encode(doc) # section type 0 = body
flag_bits = struct.pack("<I", 0)
msg_body = flag_bits + payload
# MsgHeader: totalLength(4) + requestID(4) + responseTo(4) + opCode(4)
header = struct.pack("<iiii",
16 + len(msg_body), # total length
1, # requestID (server-generated)
request_id, # responseTo: echo the client's requestID
2013, # OP_MSG
)
return header + msg_body
```
---
## 5. Redis — Empty keyspace (Medium)
### Problem
`KEYS *` returns `*0\r\n` after a successful AUTH. A real exposed Redis always has data.
Attacker does `AUTH anypassword` → `+OK` → `KEYS *` → empty → leaves.
### Fix — fake key-value store
Add a module-level dict with bait data. Handle `KEYS`, `GET`, `SCAN`, `TYPE`, `TTL`:
```python
_FAKE_STORE = {
b"sessions:user:1234": b'{"id":1234,"user":"admin","token":"eyJhbGciOiJIUzI1NiJ9..."}',
b"sessions:user:5678": b'{"id":5678,"user":"alice","token":"eyJhbGciOiJIUzI1NiJ9..."}',
b"cache:api_key": b"sk_live_9mK3xF2aP7qR1bN8cT4dW6vE0yU5hJ",
b"jwt:secret": b"super_secret_jwt_signing_key_do_not_share_2024",
b"user:admin": b'{"username":"admin","password":"$2b$12$LQv3c1yqBWVHxkd0LHAkC.","role":"superadmin"}',
b"user:alice": b'{"username":"alice","password":"$2b$12$XKLDm3vT8nPqR4sY2hE6fO","role":"user"}',
b"config:db_password": b"Pr0dDB!2024#Secure",
b"config:aws_access_key": b"AKIAIOSFODNN7EXAMPLE",
b"config:aws_secret_key": b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
b"rate_limit:192.168.1.1": b"42",
}
```
Commands to handle:
- `KEYS *` → all keys as RESP array
- `KEYS pattern` → filtered (basic glob: `*` matches all, `prefix:*` matches prefix)
- `GET key` → value or `$-1` (nil)
- `SCAN 0` → `*2\r\n$1\r\n0\r\n` + keys array (cursor always 0, return all)
- `TYPE key` → `+string\r\n`
- `TTL key` → `:-1\r\n` (no expiry)
---
## 6. SIP — Hardcoded nonce (Low)
### Problem
`nonce="decnet0000"` is hardcoded. A Shodan signature could detect this string.
### Fix
```python
import secrets
nonce = secrets.token_hex(16) # e.g. "a3f8c1b2e7d94051..."
```
Generate once per connection in `connection_made`. The WWW-Authenticate header
becomes: `Digest realm="{NODE_NAME}", nonce="{nonce}", algorithm=MD5`
---
## 7. VNC — Hardcoded DES challenge (Low)
### Problem
The 16-byte DES challenge sent during VNC auth negotiation is static.
### Fix
```python
import os
self._vnc_challenge = os.urandom(16)
```
Generate in `connection_made`. Send `self._vnc_challenge` in the Security handshake.
---
## 8. PostgreSQL — Hardcoded salt (Low)
### Problem
`AuthenticationMD5Password` response contains `\xde\xad\xbe\xef` as the 4-byte salt.
### Fix
```python
import os
self._pg_salt = os.urandom(4)
```
Use `self._pg_salt` in the `R\x00\x00\x00\x0c\x00\x00\x00\x05` response bytes.
---
## Files to change
| File | Change |
|------|--------|
| `templates/http/server.py` | `after_request` header fix, proper 403 body |
| `templates/ftp/server.py` | Banner override, anonymous login, fake dir |
| `templates/mssql/server.py` | Fix TDS pre-login response bytes |
| `templates/mongodb/server.py` | Add OP_MSG handler, fix responseTo |
| `templates/redis/server.py` | Add fake key-value store, KEYS/GET/SCAN |
| `templates/sip/server.py` | Random nonce per connection |
| `templates/vnc/server.py` | Random DES challenge per connection |
| `templates/postgres/server.py` | Random MD5 salt per connection |
| `tests/test_http_headers.py` | New: assert single Server header, correct 403 body |
| `tests/test_redis.py` | Extend: KEYS *, GET, SCAN return bait data |
---
## Priority order
1. HTTP header leak — immediately visible to any attacker
2. FTP banner — immediate framework disclosure
3. MSSQL silent — service appears dead
4. MongoDB silent — service appears dead
5. Redis empty keyspace — breaks the bait value proposition
6. SIP/VNC/PostgreSQL hardcoded values — low risk, quick wins

View File

@@ -1,5 +1,48 @@
# DECNET Development Roadmap # DECNET Development Roadmap
## 🛠️ Service Realism & Interaction (First Release Path)
*Goal: Ensure every service is interactive enough to feel real during manual exploration.*
### Remote Access & Shells
- [ ] **SSH (Cowrie)** — Custom filesystem, realistic user database, and command execution.
- [ ] **Telnet (Cowrie)** — Realistic banner and command emulation.
- [ ] **RDP** — Realistic NLA authentication and screen capture (where possible).
- [ ] **VNC** — Realistic RFB protocol handshake and authentication.
- [ ] **Real SSH** — Pass-through or high-interaction proxying.
### Databases
- [ ] **MySQL** — Support for common SQL queries and realistic schema.
- [ ] **Postgres** — Realistic version strings and basic query support.
- [ ] **MSSQL** — Realistic TDS protocol handshake.
- [ ] **MongoDB** — Support for common Mongo wire protocol commands.
- [x] **Redis** — Support for basic GET/SET/INFO commands.
- [ ] **Elasticsearch** — Realistic REST API responses for `/_cluster/health` etc.
### Web & APIs
- [x] **HTTP** — Flexible templates (WordPress, phpMyAdmin, etc.) with logging.
- [ ] **Docker API** — Realistic responses for `docker version` and `docker ps`.
- [ ] **Kubernetes (K8s)** — Mocked kubectl responses and basic API exploration.
### File Transfer & Storage
- [ ] **SMB** — Realistic share discovery and basic file browsing.
- [x] **FTP** — Support for common FTP commands and directory listing.
- [ ] **TFTP** — Basic block-based file transfer emulation.
### Directory & Mail
- [ ] **LDAP** — Basic directory search and authentication responses.
- [x] **SMTP** — Mail server banners and basic EHLO/MAIL FROM support.
- [x] **IMAP** — Realistic mail folder structure and auth.
- [x] **POP3** — Basic mail retrieval protocol emulation.
### Industrial & IoT (ICS)
- [x] **MQTT** — Basic topic subscription and publishing support.
- [x] **SNMP** — Realistic MIB responses for common OIDs.
- [ ] **SIP** — Basic VoIP protocol handshake and registration.
- [ ] **LLMNR** — Realistic local name resolution responses.
- [x] **Conpot** — SCADA/ICS protocol emulation (Modbus, etc.).
---
## Core / Hardening ## Core / Hardening
- [ ] **Attacker fingerprinting** — Capture TLS JA3/JA4 hashes, TCP window sizes, User-Agent strings, and SSH client banners. - [ ] **Attacker fingerprinting** — Capture TLS JA3/JA4 hashes, TCP window sizes, User-Agent strings, and SSH client banners.

63
development/FUTURE.md Normal file
View File

@@ -0,0 +1,63 @@
# DECNET — Future Concepts & Architecture
This document tracks long-term, visionary architectural concepts and ideas that are outside the scope of the 1.0 roadmap, but represent the ultimate end-state of the DECNET framework.
## The Honeymaze: Spider Network Topology
### Concept Overview
As attackers breach the perimeter, instead of just lateral movement on a flat `/24` or massive VXLAN, DECNET can dynamically generate an infinite "daisy-chain" of isolated Docker networks. This forces the attacker to establish deep, nested C2 proxy chains (SOCKS, chisel, SSH tunnels) to pivot from machine to machine.
For example:
- `decky-01` sits on the main LAN via `eth0` (MACVLAN). It also has `eth1`, which belongs to `docker-bridge-1`.
- `decky-02` sits exclusively on `docker-bridge-1` as its `eth0`. It also has `eth1`, belonging to `docker-bridge-2`.
- `decky-03` sits exclusively on `docker-bridge-2`.
### Strategic Value
1. **High-Fidelity TTP Telemetry**: By forcing the attacker into a corner where they *must* deploy pivot infrastructure, we capture extremely high-value indicators of compromise regarding their proxy tooling and network tradecraft.
2. **Infinite Time Sinks**: An attacker can spend weeks navigating simulated air-gaps and deep corporate enclaves feeling a false sense of progression.
### Execution & Realism Restrictions
To prevent the topology from feeling artificial or obviously simulated:
1. **Asymmetric Nesting**: A strict 1:1 nested daisy chain is a dead giveaway. Real corporate networks branch organically.
- Some machines should be terminal endpoints (no nested subnets).
- Some machines acts as jump hosts bridging two large local arrays.
- The depth and horizontal fan-out per subnet must be randomized to emulate realistic DMZ $\rightarrow$ Internal $\rightarrow$ OT enclave architectures.
2. **Variable Sizing**: Subnets must contain a random number of containers. An internal enclave might have 50 flat machines, and only *one* of them acts as the bridge to the next isolated segment.
### The Logging Paradox Solved
Deeply nested, air-gapped machines present a logging challenge: if `decky-50` has no route to the internet or the logging network, how can it forward telemetry stealthily?
**Solution**: DECNET completely bypasses the container networking stack by relying purely on Docker's native `stdout` and daemon-level logging drivers. Because the host daemon handles the extraction, the attacker can completely destroy the container's virtual interfaces or be 50 layers deep in an air-gap without ever noticing a magic route, and the telemetry will still perfectly reach the SIEM out-of-band.
### Simulated Topographical Latency
If an attacker proxies 5 subnets deep into what is supposed to be a secure, physically segmented enclave, and `ping` returns a flat `0.05ms` response time, they will instantly realize it's a local simulation on a single host.
To maintain the illusion of depth, DECNET can utilize the **Linux Traffic Control (`tc`)** subsystem and its **Network Emulator (`netem`)** module on the virtual bridge interfaces (`veth` pairs).
By procedurally generating `tc` rules as the network scales, we can inject mathematical latency penalties per hop:
```bash
# Example: Add 45ms latency, +/- 10ms jitter on a normal curve, with 0.1% packet loss
tc qdisc add dev eth1 root netem delay 45ms 10ms distribution normal loss 0.1%
```
As an attacker pivots deeper into the "Spider Network," this injected latency compounds automatically. A proxy chain going 4 levels deep would realistically suffer from 150ms+ of latency and erratic jitter, perfectly mimicking the experience of routing over slow, multi-site corporate VPNs.
---
## Distributed Scale: Swarm Overlay Architecture
To scale DECNET across multiple physical racks or sites, DECNET can leverage **Docker Swarm Overlay Networks** to create a unified L2/L3 backbone without surrendering control to Swarm's "Orchestration" scheduler.
### The `--attachable` Paradigm
By default, Docker's `overlay` driver requires Swarm mode but tightly couples it to `docker service` (which abstracts and randomizes container placement to balance loads). In honeypot deployments, absolute control over physical placement is critical (e.g., placing the `scada-archetype` explicitly on bare-metal node C in the DMZ).
To solve this, DECNET will initialize the swarm control plane simply to construct the backend VXLAN, but completely ignore the service scheduler in favor of `--attachable` networks:
1. **Initialize the Control Plane** (manager node + remote worker joins):
```bash
docker swarm init
```
2. **Create the Attachable Backbone**:
```bash
docker network create -d overlay --attachable decnet-backbone
```
3. **Deploy Standalone**: Keep relying entirely on local `decnet deploy` scripts on the individual physical nodes. Because the network is `attachable`, standalone container instances can seamlessly attach to it and communicate with containers running on completely different hardware across the globe as if they were on a local layer 2 switch!

248
development/HARDENING.md Normal file
View File

@@ -0,0 +1,248 @@
# OS Fingerprint Spoofing — Hardening Roadmap
This document describes the current state of OS fingerprint spoofing in DECNET
and the planned improvements to make `nmap -O`, `p0f`, and similar passive/active
scanners see the intended OS rather than a generic Linux kernel.
---
## Current State (Post-Phase 1)
Phase 1 is **implemented and tested against live scans**. Each archetype declares
an `nmap_os` slug (e.g. `"windows"`, `"linux"`, `"embedded"`). The **composer**
resolves that slug via `os_fingerprint.get_os_sysctls()` and injects the resulting
kernel parameters into the **base container** as Docker `sysctls`. Service
containers inherit the same network namespace via `network_mode: "service:<base>"`
and therefore appear identical to outside scanners.
### Implemented sysctls (8 per OS profile)
| Sysctl | Purpose | Win | Linux | Embedded |
|---|---|---|---|---|
| `net.ipv4.ip_default_ttl` | TTL discriminator | `128` | `64` | `255` |
| `net.ipv4.tcp_syn_retries` | SYN retransmit count | `2` | `6` | `3` |
| `net.ipv4.tcp_timestamps` | TCP timestamp option (OPS probes) | `0` | `1` | `0` |
| `net.ipv4.tcp_window_scaling` | Window scale option | `1` | `1` | `0` |
| `net.ipv4.tcp_sack` | Selective ACK option | `1` | `1` | `0` |
| `net.ipv4.tcp_ecn` | ECN negotiation | `0` | `2` | `0` |
| `net.ipv4.ip_no_pmtu_disc` | DF bit in ICMP replies | `0` | `0` | `1` |
| `net.ipv4.tcp_fin_timeout` | FIN_WAIT_2 timeout (seconds) | `30` | `60` | `15` |
### Live scan results (Windows decky, 2026-04-10)
**What works:**
| nmap field | Expected | Got | Status |
|---|---|---|---|
| TTL (`T=`) | `80` (128 dec) | `T=80` | ✅ |
| TCP timestamps (`TS=`) | `U` (unsupported) | `TS=U` | ✅ |
| ECN (`CC=`) | `N` | `CC=N` | ✅ |
| TCP window (`W1=`) | `FAF0` (64240) | `W1=FAF0` | ✅ |
| Window options (`O1=`) | `M5B4NNSNWA` | `O1=M5B4NNSNWA` | ✅ |
| SACK | present | present | ✅ |
| DF bit | `DF=Y` | `DF=Y` | ✅ |
**What fails:**
| nmap field | Expected (Win) | Got | Impact |
|---|---|---|---|
| IP ID (`TI=`) | `I` (incremental) | `Z` (all zeros) | **Critical** — no Windows fingerprint in nmap's DB has `TI=Z`. This alone causes 91% confidence "Linux 2.4/2.6 embedded" |
| ICMP rate limiting | unlimited | Linux default rate | Minor — affects `IE`/`U1` probe groups |
**Key finding:** `TI=Z` is the **single remaining blocker** for a convincing
Windows fingerprint. Everything else (TTL, window, timestamps, ECN, SACK, DF)
is already correct. The Phase 2 window mangling originally planned is
**unnecessary** — the kernel already produces the correct 64240 value.
---
## Remaining Improvement Phases
### Phase 2 — ICMP Tuning via Sysctls (Low effort, Medium impact)
Two additional namespace-scoped sysctls control ICMP error rate limiting.
nmap's `IE` and `U1` probe groups measure how quickly the target responds to
ICMP and UDP-to-closed-port probes.
**Changes required:** add to `OS_SYSCTLS` in `decnet/os_fingerprint.py`.
| Sysctl | What it controls | Windows | Linux | Embedded |
|---|---|---|---|---|
| `net.ipv4.icmp_ratelimit` | Minimum ms between ICMP error messages | `0` (none) | `1000` (1/sec) | `1000` |
| `net.ipv4.icmp_ratemask` | Bitmask of ICMP types subject to rate limiting | `0` | `6168` | `6168` |
**Why:** Windows does not rate-limit ICMP error responses. Linux defaults to
1000ms between ICMP errors (effectively 1 per second per destination). When
nmap sends rapid-fire UDP probes to closed ports, a Windows machine replies to
all of them instantly while a Linux machine throttles responses. Setting
`icmp_ratelimit=0` for Windows makes the `U1` probe response timing match.
**Estimated effort:** 15 min — same pattern as Phase 1, just two more entries.
---
### Phase 3 — NFQUEUE IP ID Rewriting (Medium effort, Very high impact)
This is the **highest-priority remaining item** and the only way to fix `TI=Z`.
#### Root cause of `TI=Z`
The Linux kernel's `ip_select_ident()` function sets the IP Identification
field to `0` for all TCP packets where DF=1 (don't-fragment bit set). This is
correct behavior per RFC 6864 ("IP ID is meaningless when DF=1") but no Windows
fingerprint in nmap's database has `TI=Z`. **No namespace-scoped sysctl can
change this** — it's hardcoded in the kernel's TCP stack.
Note: `ip_no_pmtu_disc` does NOT fix this. That sysctl controls Path MTU
Discovery for UDP/ICMP paths only, not TCP IP ID generation. Setting it to 1
for Windows was tested and confirmed to have no effect on `TI=Z`.
#### Solution: NFQUEUE userspace packet rewriting
Use `iptables -t mangle` to send outgoing TCP packets to an NFQUEUE, where a
small Python daemon rewrites the IP ID field before release.
```
┌──────────────────────────┐
TCP SYN-ACK ───► │ iptables mangle/OUTPUT │
│ -j NFQUEUE --queue-num 0 │
└───────────┬──────────────┘
┌──────────────────────────┐
│ Python NFQUEUE daemon │
│ 1. Read IP ID field │
│ 2. Replace with target │
│ pattern (sequential │
│ for Windows, zero │
│ for embedded, etc.) │
│ 3. Recalculate checksum │
│ 4. Accept packet │
└───────────┬──────────────┘
Packet goes out
```
**Target IP ID patterns by OS:**
| OS | nmap label | Pattern | Implementation |
|---|---|---|---|
| Windows | `TI=I` | Sequential, incrementing by 1 per packet | Global atomic counter |
| Linux 3.x+ | `TI=Z` | Zero (DF=1) or randomized | Leave untouched (already correct) |
| Embedded/Cisco | `TI=I` or `TI=Z` | Varies by device | Sequential or zero |
| BSD | `TI=RI` | Randomized incremental | Counter + small random delta |
**Two possible approaches:**
1. **TCPOPTSTRIP + NFQUEUE (comprehensive)**
- `TCPOPTSTRIP` can strip/modify TCP options (window scale, SACK, etc.)
via pure iptables rules, no userspace needed
- `NFQUEUE` handles IP-layer rewriting (IP ID) in userspace
- Combined: full control over the TCP/IP fingerprint
2. **NFQUEUE only (simpler)**
- Single Python daemon handles everything: IP ID rewriting, and optionally
TCP option/window manipulation if ever needed
- Fewer moving parts, one daemon to monitor
**Required changes:**
- `templates/base/Dockerfile` — new, installs `iptables` + `python3-netfilterqueue`
- `templates/base/entrypoint.sh` — new, sets up iptables rules + launches daemon
- `templates/base/nfq_spoofer.py` — new, the NFQUEUE packet rewriting daemon
- `os_fingerprint.py` — add `ip_id_pattern` field to each OS profile
- `composer.py` — pass `SPOOF_IP_ID` env var + use `templates/base/Dockerfile`
instead of bare distro images for base containers
**Dependencies on the host kernel:**
- `nfnetlink_queue` module (`modprobe nfnetlink_queue`)
- `xt_NFQUEUE` module (standard in all distro kernels)
- `NET_ADMIN` capability (already granted)
**Dependencies in the base container image:**
- `iptables` package
- `python3` + `python3-netfilterqueue` (or `scapy` with `NetfilterQueue`)
**Estimated effort:** 46 hours + tests
---
### Phase 4 — Full Fingerprint Database Matching (Hard, Low marginal impact)
After Phases 23, the remaining fingerprint differences are increasingly minor:
| Signal | Current | Notes |
|---|---|---|
| TCP initial sequence number (ISN) pattern (`SP=`, `ISR=`) | Linux kernel default | Kernel-level, not spoofable without userspace TCP |
| TCP window variance across probes | Constant (`FAF0` × 6) | Real Windows sometimes varies slightly |
| T2/T3 responses | `R=N` (no response) | Correct for some Windows, wrong for others |
| ICMP data payload echo | Linux default | Difficult to control per-namespace |
These are diminishing returns. With Phases 13 complete, `nmap -O` should
correctly identify the OS family in >90% of scans.
> Phase 4 is **not recommended** for the near term. Effort is measured in days
> for single-digit percentage improvements.
---
## Implementation Priority (revised)
```
Phase 1 ✅ DONE ─────────────────────────────
└─ 8 sysctls per OS in os_fingerprint.py
└─ Verified: TTL, window, timestamps, ECN, SACK all correct
Phase 2 ──────────────────────────────── (implement next)
└─ 2 more sysctls: icmp_ratelimit + icmp_ratemask
└─ Estimated effort: 15 min
Phase 3 ──────────────────────────────── (high priority)
└─ NFQUEUE daemon in templates/base/
└─ Fix TI=Z for Windows (THE remaining blocker)
└─ Estimated effort: 46 hours + tests
Phase 4 ──────────────────────────────── (not recommended)
└─ ISN pattern, T2/T3, ICMP payload echo
└─ Estimated effort: days, diminishing returns
```
---
## Testing Strategy
After each phase, validate with:
```bash
# Active OS fingerprint scan against a deployed decky
sudo nmap -O --osscan-guess <decky_ip>
# Aggressive scan with version detection
sudo nmap -sV -O -A --osscan-guess <decky_ip>
# Passive fingerprinting (run on host while generating traffic to decky)
sudo p0f -i <macvlan_interface> -p
# Quick TTL + window check
hping3 -S -p 445 <decky_ip> # inspect TTL and window in reply
# Test INI (all OS families, 10 deckies)
sudo .venv/bin/decnet deploy --config arche-test.ini --interface eth0
```
### Expected outcomes by phase
| Check | Pre-Phase 1 | Post-Phase 1 ✅ | Post-Phase 2 | Post-Phase 3 |
|---|---|---|---|---|
| TTL | ✅ | ✅ | ✅ | ✅ |
| TCP timestamps | ❌ | ✅ | ✅ | ✅ |
| TCP window size | ❌ | ✅ (kernel default OK) | ✅ | ✅ |
| ECN | ❌ | ✅ | ✅ | ✅ |
| ICMP rate limiting | ❌ | ❌ | ✅ | ✅ |
| IP ID sequence (`TI=`) | ❌ | ❌ | ❌ | ✅ |
| `nmap -O` family match | ⚠️ | ⚠️ (TI=Z blocks) | ⚠️ | ✅ |
| `p0f` match | ⚠️ | ⚠️ | ✅ | ✅ |
### Note on `P=` field in nmap output
The `P=x86_64-redhat-linux-gnu` that appears in the `SCAN(...)` block is the
**GNU build triple of the nmap binary itself**, not a fingerprint of the target.
It cannot be changed and is not relevant to OS spoofing.

232
development/ICS_SCADA.md Normal file
View File

@@ -0,0 +1,232 @@
# ICS/SCADA Bait — Plan
> Scenario: attacker finds MQTT broker on a water treatment plant, subscribes to
> sensor topics, publishes commands trying to "open the valve" or "disable chlorination".
---
## Services in scope
| Service | Port | Current state | Target state |
|---------|------|--------------|-------------|
| MQTT | 1883 | CONNACK 0x05 (reject all) | CONNACK 0x00, fake sensor topics |
| SNMP | 161/UDP | Functional, generic sysDescr | sysDescr tuned per archetype |
| Conpot | 502 | Not responding | Investigate + fix port mapping |
---
## MQTT — water plant persona
### Current behavior
Every CONNECT gets `CONNACK 0x05` (Not Authorized) and the connection is closed.
An ICS attacker immediately moves on — there's nothing to interact with.
### Target behavior
Accept all connections (`CONNACK 0x00`). Publish retained sensor data on
realistic SCADA topics. Log every PUBLISH command (attacker trying to control plant).
### Topic tree
```
plant/water/tank1/level → "73.4" (percent full)
plant/water/tank1/pressure → "2.81" (bar)
plant/water/pump1/status → "RUNNING"
plant/water/pump1/rpm → "1420"
plant/water/pump2/status → "STANDBY"
plant/water/chlorine/dosing → "1.2" (mg/L)
plant/water/chlorine/residual → "0.8" (mg/L)
plant/water/valve/inlet/state → "OPEN"
plant/water/valve/drain/state → "CLOSED"
plant/alarm/high_pressure → "0"
plant/alarm/low_chlorine → "0"
plant/alarm/pump_fault → "0"
plant/$SYS/broker/version → "Mosquitto 2.0.15"
plant/$SYS/broker/uptime → "2847392 seconds"
```
All topics have `retain=True` so subscribers immediately receive the last value.
### Protocol changes needed
Add handling for:
- **SUBSCRIBE (pkt_type=8)**: Parse topic filter + QoS pairs. For each matching topic,
send SUBACK then immediately send a PUBLISH with the retained value.
- **PUBLISH (pkt_type=3)**: Log the topic + payload (this is the attacker "sending a command").
Return PUBACK for QoS 1. Do NOT update the retained value (the plant ignores the command).
- **PINGREQ (pkt_type=12)**: Already handled. Keep alive.
- **DISCONNECT (pkt_type=14)**: Close cleanly.
Do NOT implement: UNSUBSCRIBE, QoS 2. Return SUBACK with QoS 1 for all subscriptions.
### CONNACK change
```python
_CONNACK_ACCEPTED = b"\x20\x02\x00\x00" # session_present=0, return_code=0
```
### Env vars
| Var | Default | Description |
|-----|---------|-------------|
| `MQTT_PERSONA` | `water_plant` | Topic tree preset |
| `MQTT_ACCEPT_ALL` | `1` | Accept all connections |
| `NODE_NAME` | `mqtt-broker` | Hostname in logs |
---
## SUBSCRIBE packet parsing
```python
def _parse_subscribe(payload: bytes):
"""Returns (packet_id, [(topic, qos), ...])"""
pos = 0
packet_id = struct.unpack(">H", payload[pos:pos+2])[0]
pos += 2
topics = []
while pos < len(payload):
topic, pos = _read_utf8(payload, pos)
qos = payload[pos] & 0x03
pos += 1
topics.append((topic, qos))
return packet_id, topics
```
### SUBACK
```python
def _suback(packet_id: int, granted_qos: list[int]) -> bytes:
payload = struct.pack(">H", packet_id) + bytes(granted_qos)
return bytes([0x90, len(payload)]) + payload
```
### PUBLISH (server → client, retained)
```python
def _publish(topic: str, value: str, retain: bool = True) -> bytes:
topic_bytes = topic.encode()
topic_len = struct.pack(">H", len(topic_bytes))
payload = value.encode()
# Fixed header: type=3, retain flag, no QoS (fire and forget for retained)
fixed = 0x31 if retain else 0x30
remaining = len(topic_len) + len(topic_bytes) + len(payload)
return bytes([fixed, remaining]) + topic_len + topic_bytes + payload
```
---
## SNMP — sysDescr per archetype
Current `sysDescr` is a generic Linux string. It should reflect the decky's persona.
### Archetype strings
| Archetype | sysDescr |
|-----------|---------|
| water_plant | `Linux scada-plc01 4.19.0-18-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29) x86_64` |
| factory | `VxWorks 6.9 (Rockwell Automation Allen-Bradley ControlLogix 5580)` |
| substation | `SEL Real-Time Automation Controller RTAC SEL-3555 firmware 1.9.7.0` |
| hospital | `Linux medlogic-srv01 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 x86_64` |
| default | `Linux decky-host 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64` |
Env var `SNMP_ARCHETYPE` selects the string. The SNMP server should also tune:
- `sysContact.0``ICS Admin <ics-admin@plant.local>`
- `sysLocation.0``Water Treatment Facility — Pump Room B`
- `sysName.0``scada-plc01` (from `NODE_NAME`)
---
## Conpot — Modbus TCP (port 502)
### Current state
Port 502 shows `CLOSED` in nmap. Conpot is deployed as a service container but
is either not binding to 502 or the port mapping is wrong.
### Diagnosis steps
1. Check the compose fragment: `decnet services conpot` — what port does it expose?
2. `docker exec decky-01-conpot netstat -tlnp` or `ss -tlnp` — is Conpot listening on 502?
3. Check Conpot's default config — it may listen on a non-standard port (e.g. 5020) and
expect a host-level iptables REDIRECT rule to map 502 → 5020.
### Fix options
**Option A** (preferred): Configure Conpot to listen directly on 502 by editing its
`default.xml` template and setting `<port>502</port>`.
**Option B**: Add `iptables -t nat -A PREROUTING -p tcp --dport 502 -j REDIRECT --to-port 5020`
to the base container entrypoint. Fragile — prefer A.
### What Modbus should respond
Conpot's default Modbus template already implements a plausible PLC. The key registers
to tune for water-plant persona:
| Register | Address | Value | Description |
|----------|---------|-------|-------------|
| Coil | 0 | 1 | Pump 1 running |
| Coil | 1 | 0 | Pump 2 standby |
| Holding | 0 | 734 | Tank level (73.4%) |
| Holding | 1 | 281 | Pressure (2.81 bar × 100) |
| Holding | 2 | 12 | Chlorine dosing (1.2 mg/L × 10) |
These values should be consistent with the MQTT topic tree so an attacker who
probes both sees a coherent picture.
---
## Log events
### MQTT
| event_type | Fields | Trigger |
|------------|--------|---------|
| `connect` | src, src_port, client_id, username | CONNECT packet |
| `subscribe` | src, topics | SUBSCRIBE packet |
| `publish` | src, topic, payload | PUBLISH from client (attacker command!) |
| `disconnect` | src | DISCONNECT or connection lost |
### SNMP
No changes to event structure — sysDescr is just a config string.
---
## Files to change
| File | Change |
|------|--------|
| `templates/mqtt/server.py` | Accept connections, SUBSCRIBE handler, retained PUBLISH, PUBLISH log |
| `templates/snmp/server.py` | Add `SNMP_ARCHETYPE` env var, tune sysDescr/sysContact/sysLocation |
| `templates/conpot/` | Investigate port config, fix 502 binding |
| `tests/test_mqtt.py` | New: connect accepted, subscribe → retained publish, attacker publish logged |
| `tests/test_snmp.py` | Extend: sysDescr per archetype |
---
## Verification against live decky
```bash
# MQTT: connect and subscribe
mosquitto_sub -h 192.168.1.200 -t "plant/#" -v
# Expected output:
# plant/water/tank1/level 73.4
# plant/water/pump1/status RUNNING
# ...
# MQTT: attacker sends a command (should be logged)
mosquitto_pub -h 192.168.1.200 -t "plant/water/valve/inlet/state" -m "CLOSED"
# Modbus: read coil 0 (pump status)
# (requires mbpoll or similar)
mbpoll -a 1 -r 1 -c 2 192.168.1.200
# SNMP: sysDescr check
snmpget -v2c -c public 192.168.1.200 1.3.6.1.2.1.1.1.0
# Expected: STRING: "Linux scada-plc01 4.19.0..."
```

220
development/IMAP_BAIT.md Normal file
View File

@@ -0,0 +1,220 @@
# IMAP Bait Mailboxes — Plan
> Scenario: attacker credential-stuffs IMAP, logs in as `admin`/`admin`,
> browses mail, finds juicy internal communications and credential leaks.
---
## Current state
Both IMAP and POP3 reject **all** credentials with a hard-coded failure.
No mailbox commands are implemented. An attacker that successfully guesses
credentials (which they can't, ever) would have nothing to read anyway.
This is the biggest missed opportunity in the whole stack.
---
## Design
### Credential policy
Accept a configurable set of username/password pairs. Defaults baked into
the image — typical attacker wordlist winners:
```
admin / admin
admin / password
admin / 123456
root / root
mail / mail
user / user
```
Env var override: `IMAP_USERS=admin:admin,root:toor,user:letmein`
Wrong credentials → `NO [AUTHENTICATIONFAILED] Invalid credentials` (log the attempt).
Right credentials → `OK` + full session.
### Fake mailboxes
One static mailbox tree, same for all users (honeypot doesn't need per-user isolation):
```
INBOX (12 messages)
Sent (8 messages)
Drafts (1 message)
Archive (3 messages)
```
### Bait email content
Bait emails are seeded at startup from a `MAIL_SEED` list embedded in the server.
Content is designed to reward the attacker for staying in the session:
**INBOX messages (selected)**
| # | From | Subject | Bait payload |
|---|------|---------|-------------|
| 1 | devops@company.internal | AWS credentials rotation | `AKIAIOSFODNN7EXAMPLE` / `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| 2 | monitoring@company.internal | DB password changed | `mysql://admin:Sup3rS3cr3t!@10.0.1.5:3306/production` |
| 3 | noreply@github.com | Your personal access token | `ghp_16C7e42F292c6912E7710c838347Ae178B4a` |
| 4 | admin@company.internal | VPN config attached | `vpn.company.internal:1194 user=vpnadmin pass=VpnP@ss2024` |
| 5 | sysadmin@company.internal | Root password | New root pw: `r00tM3T00!` — change after first login |
| 6 | backup@company.internal | Backup job failed | Backup to `192.168.1.50:/mnt/nas` — credentials in /etc/backup.conf |
| 7 | alerts@company.internal | SSH brute-force alert | 47 attempts from 185.220.101.x against root — all blocked |
**Sent messages**
| # | To | Subject | Bait payload |
|---|-----|---------|-------------|
| 1 | vendor@external.com | API credentials | API key: `sk_live_xK3mF2...9aP` |
| 2 | helpdesk@company.internal | Need access reset | My password is `Winter2024!` — please reset MFA |
**Drafts**
| # | Subject | Bait payload |
|---|---------|-------------|
| 1 | DO NOT SEND - k8s secrets | `kubectl get secret admin-token -n kube-system -o yaml` output pasted in |
---
## Protocol implementation
### IMAP4rev1 commands to implement
```
CAPABILITY → * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
LOGIN → authenticate or reject
SELECT → select INBOX / Sent / Drafts / Archive
LIST → return folder tree
LSUB → same as LIST (subscribed)
STATUS → return EXISTS / RECENT / UNSEEN for a mailbox
FETCH → return message headers or full body
UID FETCH → same with UID addressing
SEARCH → stub: return all UIDs (we don't need real search)
EXAMINE → read-only SELECT
CLOSE → deselect current mailbox
LOGOUT → BYE + OK
NOOP → OK
```
Commands NOT needed (return `BAD`): `STORE`, `COPY`, `APPEND`, `EXPUNGE`.
Attackers rarely run these. Logging `BAD` is fine if they do.
### Banner
Change from:
```
* OK [omega-decky] IMAP4rev1 Service Ready
```
To:
```
* OK Dovecot ready.
```
nmap currently says "(unrecognized)". Dovecot banner makes it ID correctly.
### CAPABILITY advertisement
```
* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
```
### SELECT response
```
* 12 EXISTS
* 0 RECENT
* OK [UNSEEN 7] Message 7 is first unseen
* OK [UIDVALIDITY 1712345678] UIDs valid
* OK [UIDNEXT 13] Predicted next UID
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS (\Deleted \Seen \*)] Limited
A3 OK [READ-WRITE] SELECT completed
```
### FETCH envelope/body
Message structs stored as Python dataclasses at startup. `FETCH 1:* (FLAGS ENVELOPE)` returns
envelope tuples in RFC 3501 format. `FETCH N BODY[]` returns the raw RFC 2822 message.
---
## POP3 parity
POP3 is much simpler. Same credential list. After successful PASS:
```
STAT → +OK 12 48000 (12 messages, total ~48 KB)
LIST → +OK 12 messages\r\n1 3912\r\n2 2048\r\n...\r\n.
RETR N → +OK <size>\r\n<raw message>\r\n.
TOP N L → +OK\r\n<first L body lines>\r\n.
UIDL → +OK\r\n1 <uid>\r\n...\r\n.
DELE N → +OK Message deleted (just log it, don't actually remove)
CAPA → +OK\r\nTOP\r\nUSER\r\nUIDL\r\nRESP-CODES\r\nAUTH-RESP-CODE\r\nSASL\r\n.
```
---
## State machine (IMAP)
```
NOT_AUTHENTICATED
→ LOGIN success → AUTHENTICATED
→ LOGIN fail → NOT_AUTHENTICATED (log, stay open for retries)
AUTHENTICATED
→ SELECT / EXAMINE → SELECTED
→ LIST / LSUB / STATUS / LOGOUT / NOOP → stay AUTHENTICATED
SELECTED
→ FETCH / UID FETCH / SEARCH / EXAMINE / SELECT → stay SELECTED
→ CLOSE / LOGOUT → AUTHENTICATED or closed
```
---
## Files to change
| File | Change |
|------|--------|
| `templates/imap/server.py` | Full rewrite: state machine, credential check, mailbox commands, bait emails |
| `templates/pop3/server.py` | Extend: credential check, STAT/LIST/RETR/UIDL/TOP/DELE/CAPA |
| `tests/test_imap.py` | New: login flow, SELECT, FETCH, bad creds, all mailboxes |
| `tests/test_pop3.py` | New: login flow, STAT, LIST, RETR, CAPA |
---
## Implementation notes
- All bait emails are hardcoded Python strings — no files to load, no I/O.
- Use a module-level `MESSAGES: list[dict]` list with fields: `uid`, `flags`, `size`, `date`,
`from_`, `to`, `subject`, `body` (full RFC 2822 string).
- `_format_envelope()` builds the IMAP ENVELOPE tuple string from the message dict.
- Thread safety: all state per-connection in the Protocol class. No shared mutable state.
---
## Env vars
| Var | Default | Description |
|-----|---------|-------------|
| `IMAP_USERS` | `admin:admin,root:root,mail:mail` | Accepted credentials (user:pass,...) |
| `IMAP_BANNER` | `* OK Dovecot ready.` | Greeting line |
| `NODE_NAME` | `mailserver` | Hostname in responses |
---
## Verification against live decky
```bash
# Credential test (should accept)
printf "A1 LOGIN admin admin\r\nA2 SELECT INBOX\r\nA3 FETCH 1:3 (FLAGS ENVELOPE)\r\nA4 FETCH 5 BODY[]\r\nA5 LOGOUT\r\n" | nc 192.168.1.200 143
# Credential test (should reject)
printf "A1 LOGIN admin wrongpass\r\n" | nc 192.168.1.200 143
# nmap fingerprint check (expect "Dovecot imapd")
nmap -p 143 -sV 192.168.1.200
```

View File

@@ -0,0 +1,403 @@
# Service Realism Audit
> Live-tested against `192.168.1.200` (omega-decky, full-audit.ini).
> Every result below is from an actual `nc` or `nmap` probe, not code reading.
---
## nmap -sV Summary
```
21/tcp ftp vsftpd (before 2.0.8) or WU-FTPD ← WRONG: banner says "Twisted 25.5.0"
23/tcp telnet (unrecognized — Cowrie)
25/tcp smtp Postfix smtpd ✓
80/tcp http Apache httpd 2.4.54 ((Debian)) ✓ BUT leaks Werkzeug
110/tcp pop3 (unrecognized)
143/tcp imap (unrecognized)
389/tcp ldap Cisco LDAP server
445/tcp microsoft-ds ✓
1433/tcp ms-sql-s? (partially recognized)
1883/tcp mqtt ✓
2375/tcp docker Docker 24.0.5 ✓
3306/tcp mysql MySQL 5.7.38-log ✓
3389/tcp ms-wbt-server xrdp
5060/tcp sip SIP endpoint; Status: 401 Unauthorized ✓
5432/tcp postgresql? (partially recognized)
5900/tcp vnc VNC protocol 3.8 ✓
6379/tcp redis? (partially recognized)
6443/tcp (unrecognized) — K8s not responding at all
9200/tcp wap-wsp? (completely unrecognized — ES)
27017/tcp mongod? (partially recognized)
502/tcp CLOSED — Conpot Modbus not on this port
```
---
## Service-by-Service
---
### SMTP — port 25
**Probe:**
```
220 omega-decky ESMTP Postfix (Debian/GNU)
250-PIPELINING / SIZE / VRFY / AUTH PLAIN LOGIN / ENHANCEDSTATUSCODES / 8BITMIME / DSN
250 2.1.0 Ok ← MAIL FROM accepted
250 2.1.5 Ok ← RCPT TO accepted for any domain ✓ (open relay bait)
354 End data with... ← DATA opened
502 5.5.2 Error: command not recognized ← BUG: each message line fails
221 2.0.0 Bye
```
**Verdict:** Banner and EHLO are perfect. DATA handler is broken — server reads the socket line-by-line but the asyncio handler dispatches each line as a new command instead of buffering until `.\r\n`. The result is every line of the email body gets a 502 and the message is silently dropped.
**Fixes needed:**
- Buffer DATA state until `\r\n.\r\n` terminator
- Return `250 2.0.0 Ok: queued as <8-hex-id>` after message accepted
- Don't require AUTH for relay (open relay is the point)
- Optionally: store message content so IMAP can serve it later
---
### IMAP — port 143
**Probe:**
```
* OK [omega-decky] IMAP4rev1 Service Ready
A1 OK CAPABILITY completed
A2 NO [AUTHENTICATIONFAILED] Invalid credentials ← always, for any user/pass
A3 BAD Command not recognized ← LIST, SELECT, FETCH all unknown
```
**Verdict:** Login always fails. No mailbox commands implemented. An attacker who tries credential stuffing or default passwords (admin/admin, root/root) gets nothing and moves on. This is the biggest missed opportunity in the whole stack.
**Fixes needed:**
- Accept configurable credentials (default `admin`/`admin` or pulled from persona config)
- Implement: SELECT, LIST, FETCH, UID FETCH, SEARCH, LOGOUT
- Serve seeded fake mailboxes with bait content (see IMAP_BAIT.md)
- CAPABILITY should advertise `LITERAL+`, `SASL-IR`, `LOGIN-REFERRALS`, `ID`, `ENABLE`, `IDLE`
- Banner should hint at Dovecot: `* OK Dovecot ready.`
---
### POP3 — port 110
**Probe:**
```
+OK omega-decky POP3 server ready
+OK ← USER accepted
-ERR Authentication failed ← always
-ERR Unknown command ← STAT, LIST, RETR all unknown
```
**Verdict:** Same problem as IMAP. CAPA only returns `USER`. Should be paired with IMAP fix to serve the same fake mailbox.
**Fixes needed:**
- Accept same credentials as IMAP
- Implement: STAT, LIST, RETR, DELE, TOP, UIDL, CAPA
- CAPA should return: `TOP UIDL RESP-CODES AUTH-RESP-CODE SASL USER`
---
### HTTP — port 80
**Probe:**
```
HTTP/1.1 403 FORBIDDEN
Server: Werkzeug/3.1.8 Python/3.11.2 ← DEAD GIVEAWAY
Server: Apache/2.4.54 (Debian) ← duplicate Server header
```
**Verdict:** nmap gets the Apache fingerprint right, but any attacker who looks at response headers sees two `Server:` headers — one of which is clearly Werkzeug/Flask. The HTTP body is also a bare `<h1>403 Forbidden</h1>` with no Apache default page styling.
**Fixes needed:**
- Strip Werkzeug from Server header (set `SERVER_NAME` on the Flask app or use middleware to overwrite)
- Apache default 403 page should be the actual Apache HTML, not a bare `<h1>` tag
- Per-path routing for fake apps: `/wp-login.php`, `/wp-admin/`, `/xmlrpc.php`, etc.
- POST credential capture on login endpoints
---
### FTP — port 21
**Probe:**
```
220 Twisted 25.5.0 FTP Server ← terrible: exposes framework
331 Guest login ok...
550 Requested action not taken ← after login, nothing works
503 Incorrect sequence of commands: must send PORT or PASV before RETR
```
**Verdict:** Banner immediately identifies this as Twisted's built-in FTP server. No directory listing. PASV mode not implemented so clients hang. Real FTP honeypots should expose anonymous access with a fake directory tree containing interesting-sounding files.
**Fixes needed:**
- Override banner to: `220 (vsFTPd 3.0.3)` or similar
- Implement anonymous login (no password required)
- Implement PASV and at minimum LIST — return a fake directory with files: `backup.tar.gz`, `db_dump.sql`, `config.ini`, `credentials.txt`
- Log any RETR attempts (file name, client IP)
---
### MySQL — port 3306
**Probe:**
```
HANDSHAKE: ...5.7.38-log...
Version: 5.7.38-log
```
**Verdict:** Handshake is excellent. nmap fingerprints it perfectly. Always returns `Access denied` which is correct behavior. The only issue is the hardcoded auth plugin data bytes in the greeting — a sophisticated scanner could detect the static challenge.
**Fixes needed (low priority):**
- Randomize the 20-byte auth plugin data per connection
---
### PostgreSQL — port 5432
**Probe:**
```
R\x00\x00\x00\x0c\x00\x00\x00\x05\xde\xad\xbe\xef
```
That's `AuthenticationMD5Password` (type=5) with salt `0xdeadbeef`.
**Verdict:** Correct protocol response. Salt is hardcoded and static — `deadbeef` is trivially identifiable as fake.
**Fixes needed (low priority):**
- Randomize the 4-byte MD5 salt per connection
---
### MSSQL — port 1433
**Probe:** No response to standard TDS pre-login packets. Server drops connection immediately.
**Verdict:** Broken. TDS pre-login handler is likely mismatching the packet format we sent.
**Fixes needed:**
- Debug TDS pre-login response — currently silent
- Verify the hardcoded TDS response bytes are valid
---
### Redis — port 6379
**Probe:**
```
+OK ← AUTH accepted (any password!)
$150
redis_version:7.2.7 / os:Linux 5.15.0 / uptime_in_seconds:864000 ...
*0 ← KEYS * returns empty
```
**Verdict:** Accepts any AUTH password (intentional for bait). INFO looks real. But `KEYS *` returns nothing — a real Redis exposed to the internet always has data. An attacker who gets `+OK` on AUTH will immediately run `KEYS *` or `SCAN 0` and leave when they find nothing.
**Fixes needed:**
- Add fake key-value store: session tokens, JWT secrets, cached user objects, API keys
- `KEYS *``["sessions:user:1234", "cache:api_key", "jwt:secret", "user:admin"]`
- `GET sessions:user:1234` → JSON user object with credentials
- `GET jwt:secret` → a plausible JWT signing key
---
### MongoDB — port 27017
**Probe:** No response to OP_MSG `isMaster` command.
**Verdict:** Broken or rejecting the wire protocol format we sent.
**Fixes needed:**
- Debug the OP_MSG/OP_QUERY handler
---
### Elasticsearch — port 9200
**Probe:**
```json
{"name":"omega-decky","cluster_uuid":"xC3Pr9abTq2mNkOeLvXwYA","version":{"number":"7.17.9",...}}
/_cat/indices [] empty: dead giveaway
```
**Verdict:** Root response is convincing. But `/_cat/indices` returns an empty array — a real exposed ES instance has indices. nmap doesn't recognize port 9200 as Elasticsearch at all ("wap-wsp?").
**Fixes needed:**
- Add fake indices: `logs-2024.01`, `users`, `products`, `audit_trail`
- `/_cat/indices` → return rows with doc counts, sizes
- `/_search` on those indices → return sample documents (bait data: user records, API tokens)
---
### Docker API — port 2375
**Probe:**
```json
/version {Version: "24.0.5", ApiVersion: "1.43", GoVersion: "go1.20.6", ...}
/containers/json [{"Id":"a1b2c3d4e5f6","Names":["/webapp"],"Image":"nginx:latest",...}]
```
**Verdict:** Version response is perfect. Container list is minimal (one hardcoded container). No `/images/json` data, no exec endpoint. An attacker will immediately try `POST /containers/webapp/exec` to get RCE.
**Fixes needed:**
- Add 3-5 containers with realistic names/images: `db` (postgres:14), `api` (node:18-alpine), `redis` (redis:7)
- Add `/images/json` with corresponding images
- Add exec endpoint that captures the command and returns `{"Id":"<random>"}` then a fake stream
---
### SMB — port 445
**Probe:** SMB1 negotiate response received (standard `\xff\x53\x4d\x42r` header).
**Verdict:** Impacket SimpleSMBServer responds. nmap IDs it as `microsoft-ds`. Functional enough for credential capture.
---
### VNC — port 5900
**Probe:**
```
RFB 003.008 ✓
```
**Verdict:** Correct RFB 3.8 handshake. nmap fingerprints it as VNC protocol 3.8. The 16-byte DES challenge is hardcoded — same bytes every time.
**Fixes needed (trivial):**
- Randomize the 16-byte challenge per connection (`os.urandom(16)`)
---
### RDP — port 3389
**Probe:**
```
0300000b06d00000000000 ← X.224 Connection Confirm
(connection closed)
```
**Verdict:** nmap identifies it as "xrdp" which is correct enough. The X.224 CC is fine. But the server closes immediately after — no NLA/CredSSP negotiation, no credential capture. This is the single biggest missed opportunity for credential harvesting after SSH.
**Fixes needed:**
- Implement NTLM Type-1/Type-2/Type-3 exchange to capture NTLMv2 hashes
- Alternatively: send a fake TLS certificate then disconnect (many scanners fingerprint by the cert)
---
### SIP — port 5060
**Probe:**
```
SIP/2.0 401 Unauthorized
WWW-Authenticate: Digest realm="omega-decky", nonce="decnet0000", algorithm=MD5
```
**Verdict:** Functional. Correctly challenges with 401. But `nonce="decnet0000"` is a hardcoded string — a Shodan signature would immediately pick this up.
**Fixes needed (low effort):**
- Generate a random hex nonce per connection
---
### MQTT — port 1883
**Probe:** `CONNACK` with return code `0x05` (not authorized).
**Verdict:** Rejects all connections. For an ICS/water-plant persona, this should accept connections and expose fake sensor topics. See `ICS_SCADA.md`.
**Fixes needed:**
- Return CONNACK 0x00 (accepted)
- Implement SUBSCRIBE: return retained sensor readings for bait topics
- Implement PUBLISH: log any published commands (attacker trying to control plant)
---
### SNMP — port 161/UDP
Not directly testable without sudo for raw UDP send, but code review shows BER encoding is correct.
**Verdict:** Functional. sysDescr is a generic Linux string — should be tuned per archetype.
---
### LDAP — port 389
**Probe:** BER response received (code 49 = invalidCredentials).
**Verdict:** Correct protocol. nmap IDs it as "Cisco LDAP server" which is fine. No rootDSE response for unauthenticated enumeration.
---
### Telnet — port 23 (Cowrie)
**Probe:**
```
login: <IAC WILL ECHO>
Password:
Login incorrect ← for all tried credentials
```
**Verdict:** Cowrie is running but rejecting everything. Default Cowrie credentials (root/1234, admin/admin, etc.) should work. May be a config issue with the decky hostname or user database.
---
### Conpot — port 502
**Verdict:** Not responding on port 502 (Modbus TCP). Conpot may use a different internal port that gets NAT'd, or it's not configured for Modbus. Needs investigation.
---
## Bug Ledger
| # | Service | Bug | Severity |
|---|------------|-------------------------------------------|----------|
| 1 | SMTP | DATA handler returns 502 for every line | Critical |
| 2 | HTTP | Werkzeug in Server header + bare 403 body | High |
| 3 | FTP | "Twisted 25.5.0" in banner | High |
| 4 | MSSQL | No response to TDS pre-login | High |
| 5 | MongoDB | No response to OP_MSG isMaster | High |
| 6 | K8s | Not responding (TLS setup?) | Medium |
| 7 | IMAP/POP3 | Always rejects, no mailbox ops | Critical (feature gap) |
| 8 | Redis | Empty keyspace after AUTH success | Medium |
| 9 | SIP/VNC | Hardcoded nonce/challenge | Low |
| 10| MQTT | Rejects all connections | High (ICS feature gap) |
| 11| Conpot | No Modbus response | Medium |
| 12| PostgreSQL | Hardcoded salt `deadbeef` | Low |
---
## Related Plans
- [`SMTP_RELAY.md`](SMTP_RELAY.md) — Fix DATA handler, implement open relay persona
- [`IMAP_BAIT.md`](IMAP_BAIT.md) — Auth + seeded mailboxes + POP3 parity
- [`ICS_SCADA.md`](ICS_SCADA.md) — MQTT water plant, SNMP tuning, Conpot
- [`BUG_FIXES.md`](BUG_FIXES.md) — HTTP header leak, FTP banner, MSSQL, MongoDB, Redis keys
---
## Progress Updates
### [2026-04-10] ICS/SCADA & IMAP Bait Completion
The following infrastructure gaps from the Bug Ledger have been successfully resolved:
* **#7 (IMAP/POP3):** Both services now implement full protocol state machines (authentication, selection/transactions, fetching) and serve realistic hardcoded bait payloads (AWS keys, DB passwords).
* **#10 (MQTT):** The service now issues successful `CONNACK` responses, presents interactive persona-driven topic trees, and logs attacker `PUBLISH` events.
* **#11 (Conpot):** Wrapped in a custom build context that correctly binds Modbus to port `502` using a temporary template overwrite, resolving the missing Modbus response issue.
---
## Implementation Plan
### Phase 3: Critical SMTP Data Handling (P0)
- **SMTP (`SMTP_RELAY.md`)**: Rewrite `templates/smtp/server.py` to buffer `DATA` blocks properly and respond to `DATA` termination with a legitimate `250 OK` queue ID. Accept all open relay behavior inherently without mandating `AUTH`.
### Phase 4: High-Severity Protocol Fingerprint Fixes (P1)
- **HTTP**: Hijack Flask `after_request` to enforce the Apache `Server` header in `templates/http/server.py`. Rewrite the 403 response body with authentic Apache HTML.
- **FTP**: Update `templates/ftp/server.py` to overwrite Twisted FTP greeting banner to `vsFTPd`. Implement `FTPAnonymousShell` to serve fake files (tarball, db dump, credentials).
- **MSSQL**: Update `templates/mssql/server.py` to emit a valid length-fixed TDS 7.x pre-login payload to successfully pass the nmap probe.
- **MongoDB**: Update `templates/mongodb/server.py` to respond to the `OP_MSG isMaster` requests generated by modern `nmap` and MongoDB clients.
### Phase 5: State & Realism Improvements (P2)
- **Redis**: Instantiate `_FAKE_STORE` dict with bait authentication tokens and JWT salts in `templates/redis/server.py` to return plausible data for `KEYS *`, `GET`, `SCAN`, etc.
- **Dynamic Nonces (SIP/VNC/Postgres)**: Use `os.urandom()` and `secrets` to dynamically generate salts/nonces per connection instead of hardcoded strings in `templates/postgres/server.py`, `templates/sip/server.py`, and `templates/vnc/server.py`.
- **K8s (Kubernetes API)**: Investigate TLS setup block for K8s API port `6443` dropping traffic, pending an actual solution (requires deeper analysis and likely a separate plan).

195
development/SMTP_RELAY.md Normal file
View File

@@ -0,0 +1,195 @@
# SMTP Open Relay — Plan
> Priority: **P0** — DATA handler is broken (502 on every body line).
> Scenario: attacker finds open relay, sends mail through it.
---
## What's broken today
`templates/smtp/server.py` sends `354 End data with <CR><LF>.<CR><LF>` on `DATA`, then
falls back to `_handle_line()` for every subsequent line. Because those lines don't start
with a recognized SMTP verb, every line gets:
```
502 5.5.2 Error: command not recognized
```
The session never completes. The message is silently dropped.
---
## Fix: DATA state machine
Add a `_in_data` flag. Once `DATA` is received, accumulate raw body lines until the
terminator `\r\n.\r\n`. On terminator: log the message, return `250`, flip flag back.
### State variables added to `SMTPProtocol.__init__`
```python
self._in_data = False
self._data_buf = [] # accumulate body lines
self._mail_from = ""
self._rcpt_to = []
```
### Modified `data_received`
No change — still splits on `\r\n`.
### Modified `_handle_line`
```python
def _handle_line(self, line: str) -> None:
# DATA body accumulation mode
if self._in_data:
if line == ".":
# end of message
body = "\r\n".join(self._data_buf)
msg_id = _rand_msg_id()
_log("message_accepted",
src=self._peer[0],
mail_from=self._mail_from,
rcpt_to=",".join(self._rcpt_to),
body_bytes=len(body),
msg_id=msg_id)
self._transport.write(f"250 2.0.0 Ok: queued as {msg_id}\r\n".encode())
self._in_data = False
self._data_buf = []
else:
# RFC 5321 dot-stuffing: leading dot means literal dot, strip it
self._data_buf.append(line[1:] if line.startswith("..") else line)
return
cmd = line.split()[0].upper() if line.split() else ""
# ... existing handlers ...
elif cmd == "MAIL":
self._mail_from = line.split(":", 1)[1].strip() if ":" in line else line
_log("mail_from", src=self._peer[0], value=self._mail_from)
self._transport.write(b"250 2.0.0 Ok\r\n")
elif cmd == "RCPT":
rcpt = line.split(":", 1)[1].strip() if ":" in line else line
self._rcpt_to.append(rcpt)
_log("rcpt_to", src=self._peer[0], value=rcpt)
self._transport.write(b"250 2.1.5 Ok\r\n")
elif cmd == "DATA":
if not self._mail_from or not self._rcpt_to:
self._transport.write(b"503 5.5.1 Error: need MAIL command\r\n")
else:
self._in_data = True
self._transport.write(b"354 End data with <CR><LF>.<CR><LF>\r\n")
elif cmd == "RSET":
self._mail_from = ""
self._rcpt_to = []
self._in_data = False
self._data_buf = []
self._transport.write(b"250 2.0.0 Ok\r\n")
```
### Helper
```python
import random, string
def _rand_msg_id() -> str:
"""Return a Postfix-style 12-char hex queue ID."""
return "".join(random.choices("0123456789ABCDEF", k=12))
```
---
## Open relay behavior
The current server already returns `250 2.1.5 Ok` for any `RCPT TO` regardless of domain.
That's correct — do NOT gate on the domain. The attacker's goal is to relay spam. We let
them "succeed" and log everything.
Remove the `AUTH` rejection + close. An open relay doesn't require authentication. Replace:
```python
elif cmd == "AUTH":
_log("auth_attempt", src=self._peer[0], command=line)
self._transport.write(b"535 5.7.8 Error: authentication failed: ...\r\n")
self._transport.close()
```
With:
```python
elif cmd == "AUTH":
# Log the attempt but advertise that auth succeeds (open relay bait)
_log("auth_attempt", src=self._peer[0], command=line)
self._transport.write(b"235 2.7.0 Authentication successful\r\n")
```
Some scanners probe AUTH before DATA. Accepting it keeps them engaged.
---
## Banner / persona
Current banner is already perfect: `220 omega-decky ESMTP Postfix (Debian/GNU)`.
The `SMTP_BANNER` env var lets per-decky customization happen at deploy time via the
persona config — no code change needed.
---
## Log events emitted
| event_type | Fields |
|------------------|---------------------------------------------------|
| `connect` | src, src_port |
| `ehlo` | src, domain |
| `auth_attempt` | src, command |
| `mail_from` | src, value |
| `rcpt_to` | src, value (one event per recipient) |
| `message_accepted` | src, mail_from, rcpt_to, body_bytes, msg_id |
| `disconnect` | src |
---
## Files to change
| File | Change |
|------|--------|
| `templates/smtp/server.py` | DATA state machine, open relay AUTH accept, RSET fix |
| `tests/test_smtp.py` | New: DATA → 250 flow, multi-recipient, dot-stuffing, RSET |
---
## Test cases (pytest)
```python
# full send flow
conn EHLO MAIL FROM RCPT TO DATA body lines "." 250 2.0.0 Ok: queued as ...
# multi-recipient
RCPT TO x3 DATA body "." 250
# dot-stuffing
..real dot body line stored as ".real dot"
# RSET mid-session
MAIL FROM RCPT TO RSET assert _mail_from == "" and _rcpt_to == []
# AUTH accept
AUTH PLAIN base64 235
# 503 if DATA before MAIL
DATA (no prior MAIL) 503
```
---
## Verification against live decky
```bash
# Full relay test
printf "EHLO test.com\r\nMAIL FROM:<hacker@evil.com>\r\nRCPT TO:<admin@target.com>\r\nDATA\r\nSubject: hello\r\n\r\nBody line 1\r\nBody line 2\r\n.\r\nQUIT\r\n" | nc 192.168.1.200 25
# Expected final lines:
# 354 End data with ...
# 250 2.0.0 Ok: queued as <ID>
# 221 2.0.0 Bye
```

View File

@@ -0,0 +1,476 @@
Nmap scan report for 192.168.1.200
Host is up (0.0000020s latency).
Not shown: 65515 closed tcp ports (reset)
PORT STATE SERVICE VERSION
21/tcp open ftp vsftpd (before 2.0.8) or WU-FTPD
23/tcp open telnet?
| fingerprint-strings:
| DNSStatusRequestTCP, DNSVersionBindReqTCP, DistCCD, JavaRMI, LANDesk-RC, LDAPBindReq, NULL, NotesRPC, RPCCheck, Radmin, TerminalServer, WMSRequest, X11Probe, mydoom, tn3270:
| login:
| FourOhFourRequest, GenericLines, GetRequest, HTTPOptions, LDAPSearchReq, RTSPRequest:
| login:
| Password:
| Login incorrect
| login:
| Hello, Help, Kerberos, LPDString, NessusTPv10, NessusTPv11, NessusTPv12, SSLSessionReq, SSLv23SessionReq, TerminalServerCookie, Verifier, VerifierAdvanced, WWWOFFLEctrlstat:
| login:
| Password:
| SIPOptions:
| login:
| Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
|_ login: Password:
25/tcp open smtp Postfix smtpd
|_smtp-commands: omega-decky, PIPELINING, SIZE 10240000, VRFY, ETRN, AUTH PLAIN LOGIN, ENHANCEDSTATUSCODES, 8BITMIME, DSN
80/tcp open http Apache httpd 2.4.54
|_http-title: 403 Forbidden
|_http-server-header: Werkzeug/3.1.8 Python/3.11.2
110/tcp open pop3 Dovecot pop3d ([omega-decky])
|_pop3-capabilities: USER
143/tcp open imap Dovecot imapd
|_imap-capabilities: IMAP4rev1 AUTH=PLAIN OK completed AUTH=LOGINA0001 CAPABILITY
389/tcp open ldap Cisco LDAP server
445/tcp open microsoft-ds
| fingerprint-strings:
| SMBProgNeg:
| SMBr
|_ "3DUfw
1433/tcp open ms-sql-s?
1883/tcp open mqtt
| mqtt-subscribe:
| Topics and their most recent payloads:
| plant/water/pump2/status: STANDBY
| plant/alarm/high_pressure: 0
| plant/water/chlorine/residual: 0.8
| plant/water/chlorine/dosing: 1.2
| plant/water/pump1/rpm: 1419
| plant/water/tank1/level: 76.6
| plant/$SYS/broker/uptime: 2847392
| plant/$SYS/broker/version: Mosquitto 2.0.15
| plant/water/valve/inlet/state: OPEN
| plant/water/valve/drain/state: CLOSED
| plant/water/tank1/pressure: 2.86
| plant/water/pump1/status: RUNNING
| plant/alarm/low_chlorine: 0
|_ plant/alarm/pump_fault: 0
2375/tcp open docker Docker 24.0.5
| fingerprint-strings:
| GetRequest:
| HTTP/1.1 404 NOT FOUND
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: application/json
| Content-Length: 46
| Connection: close
| {"message": "page not found", "response": 404}
| HTTPOptions:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: text/html; charset=utf-8
| Allow: HEAD, OPTIONS, GET
| Content-Length: 0
| Connection: close
| Hello:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request syntax ('EHLO').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| docker:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: application/json
| Content-Length: 187
| Connection: close
|_ {"Version": "24.0.5", "ApiVersion": "1.43", "MinAPIVersion": "1.12", "GitCommit": "ced0996", "GoVersion": "go1.20.6", "Os": "linux", "Arch": "amd64", "KernelVersion": "5.15.0-76-generic"}
| docker-version:
| KernelVersion: 5.15.0-76-generic
| MinAPIVersion: 1.12
| Arch: amd64
| Os: linux
| GoVersion: go1.20.6
| Version: 24.0.5
| GitCommit: ced0996
|_ ApiVersion: 1.43
3306/tcp open mysql MySQL 5.7.38-log
| mysql-info:
| Protocol: 10
| Version: 5.7.38-log
| Thread ID: 1
| Capabilities flags: 63487
| Some Capabilities: Support41Auth, DontAllowDatabaseTableColumn, Speaks41ProtocolOld, ConnectWithDatabase, SupportsTransactions, IgnoreSpaceBeforeParenthesis, SupportsCompression, LongColumnFlag, SupportsLoadDataLocal, ODBCClient, LongPassword, Speaks41ProtocolNew, InteractiveClient, FoundRows, IgnoreSigpipes, SupportsMultipleStatments, SupportsMultipleResults, SupportsAuthPlugins
| Status: Autocommit
| Salt: pv!magic!O}%>UM|gu^1
|_ Auth Plugin Name: mysql_native_password
3389/tcp open ms-wbt-server xrdp
5060/tcp open sip (SIP end point; Status: 401 Unauthorized)
| fingerprint-strings:
| HTTPOptions:
| SIP/2.0 401 Unauthorized
| Via:
| From:
| Call-ID:
| CSeq:
| WWW-Authenticate: Digest realm="omega-decky", nonce="fa63b9f8e719d810", algorithm=MD5
| Content-Length: 0
| RTSPRequest:
| SIP/2.0 401 Unauthorized
| Via:
| From:
| Call-ID:
| CSeq:
| WWW-Authenticate: Digest realm="omega-decky", nonce="25b193b6f8c63e9d", algorithm=MD5
| Content-Length: 0
| SIPOptions:
| SIP/2.0 401 Unauthorized
| Via: SIP/2.0/TCP nm;branch=foo
| From: <sip:nm@nm>;tag=root
| <sip:nm2@nm2>
| Call-ID: 50000
| CSeq: 42 OPTIONS
| WWW-Authenticate: Digest realm="omega-decky", nonce="7d2aa09cb9bfbac0", algorithm=MD5
|_ Content-Length: 0
5432/tcp open postgresql?
5900/tcp open vnc VNC (protocol 3.8)
| vnc-info:
| Protocol version: 3.8
| Security types:
|_ VNC Authentication (2)
6379/tcp open redis?
| fingerprint-strings:
| HELP4STOMP, HTTPOptions, Hello, Help, Kerberos, LPDString, Memcache, NessusTPv10, NessusTPv11, NessusTPv12, RTSPRequest, SSLSessionReq, SSLv23SessionReq, Socks5, SqueezeCenter_CLI, TLSSessionReq, TerminalServerCookie, Verifier, VerifierAdvanced, WWWOFFLEctrlstat, ajp, dominoconsole, firebird:
| -ERR unknown command
| LDAPSearchReq, hp-pjl, pervasive-btrieve:
| -ERR unknown command
| -ERR unknown command
| SIPOptions:
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| redis-server:
| $150
| Server
| redis_version:7.2.7
| redis_mode:standalone
| os:Linux 5.15.0
| arch_bits:64
| tcp_port:6379
| uptime_in_seconds:864000
| connected_clients:1
|_ Keyspace
6443/tcp open sun-sr-https?
| fingerprint-strings:
| GetRequest:
| HTTP/1.1 404 NOT FOUND
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: application/json
| Content-Length: 52
| Connection: close
| {"kind": "Status", "status": "Failure", "code": 404}
| HTTPOptions:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: text/html; charset=utf-8
| Allow: GET, HEAD, OPTIONS
| Content-Length: 0
| Connection: close
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| SSLSessionReq:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request syntax ('
| &lt;=
| ').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
|_ </html>
9200/tcp open wap-wsp?
| fingerprint-strings:
| GetRequest:
| HTTP/1.0 200 OK
| Server: elasticsearch
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Content-Type: application/json; charset=UTF-8
| Content-Length: 477
| X-elastic-product: Elasticsearch
| {"name": "omega-decky", "cluster_name": "elasticsearch", "cluster_uuid": "xC3Pr9abTq2mNkOeLvXwYA", "version": {"number": "7.17.9", "build_flavor": "default", "build_type": "docker", "build_hash": "ef48222227ee6b9e70e502f0f0daa52435ee634d", "build_date": "2023-01-31T05:34:43.305517834Z", "build_snapshot": false, "lucene_version": "8.11.1", "minimum_wire_compatibility_version": "6.8.0", "minimum_index_compatibility_version": "6.0.0-beta1"}, "tagline": "You Know, for Search"}
| HTTPOptions:
| HTTP/1.0 501 Unsupported method ('OPTIONS')
| Server: elasticsearch
| Date: Fri, 10 Apr 2026 06:25:23 GMT
| Connection: close
| Content-Type: text/html;charset=utf-8
| Content-Length: 360
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 501</p>
| <p>Message: Unsupported method ('OPTIONS').</p>
| <p>Error code explanation: 501 - Server does not support this operation.</p>
| </body>
| </html>
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
|_ </html>
27017/tcp open mongod?
|_mongodb-databases: ERROR: Script execution failed (use -d to debug)
|_mongodb-info: ERROR: Script execution failed (use -d to debug)
8 services unrecognized despite returning data. If you know the service/version, please submit the following fingerprints at https://nmap.org/cgi-bin/submit.cgi?new-service :
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port23-TCP:V=7.92%I=9%D=4/10%Time=69D897D3%P=x86_64-redhat-linux-gnu%r(
SF:NULL,7,"login:\x20")%r(GenericLines,2C,"login:\x20\xff\xfb\x01Password:
SF:\x20\nLogin\x20incorrect\nlogin:\x20")%r(tn3270,16,"login:\x20\xff\xfe\
SF:x18\xff\xfe\x19\xff\xfc\x19\xff\xfe\0\xff\xfc\0")%r(GetRequest,2C,"logi
SF:n:\x20\xff\xfb\x01Password:\x20\nLogin\x20incorrect\nlogin:\x20")%r(HTT
SF:POptions,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogin\x20incorrect\nl
SF:ogin:\x20")%r(RTSPRequest,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogi
SF:n\x20incorrect\nlogin:\x20")%r(RPCCheck,7,"login:\x20")%r(DNSVersionBin
SF:dReqTCP,7,"login:\x20")%r(DNSStatusRequestTCP,7,"login:\x20")%r(Hello,1
SF:4,"login:\x20\xff\xfb\x01Password:\x20")%r(Help,14,"login:\x20\xff\xfb\
SF:x01Password:\x20")%r(SSLSessionReq,14,"login:\x20\xff\xfb\x01Password:\
SF:x20")%r(TerminalServerCookie,14,"login:\x20\xff\xfb\x01Password:\x20")%
SF:r(SSLv23SessionReq,14,"login:\x20\xff\xfb\x01Password:\x20")%r(Kerberos
SF:,14,"login:\x20\xff\xfb\x01Password:\x20")%r(X11Probe,7,"login:\x20")%r
SF:(FourOhFourRequest,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogin\x20in
SF:correct\nlogin:\x20")%r(LPDString,14,"login:\x20\xff\xfb\x01Password:\x
SF:20")%r(LDAPSearchReq,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogin\x20
SF:incorrect\nlogin:\x20")%r(LDAPBindReq,7,"login:\x20")%r(SIPOptions,BE,"
SF:login:\x20\xff\xfb\x01Password:\x20\nLogin\x20incorrect\nlogin:\x20Pass
SF:word:\x20\nLogin\x20incorrect\nlogin:\x20Password:\x20\nLogin\x20incorr
SF:ect\nlogin:\x20Password:\x20\nLogin\x20incorrect\nlogin:\x20Password:\x
SF:20\nLogin\x20incorrect\nlogin:\x20Password:\x20")%r(LANDesk-RC,7,"login
SF::\x20")%r(TerminalServer,7,"login:\x20")%r(NotesRPC,7,"login:\x20")%r(D
SF:istCCD,7,"login:\x20")%r(JavaRMI,7,"login:\x20")%r(Radmin,7,"login:\x20
SF:")%r(NessusTPv12,14,"login:\x20\xff\xfb\x01Password:\x20")%r(NessusTPv1
SF:1,14,"login:\x20\xff\xfb\x01Password:\x20")%r(NessusTPv10,14,"login:\x2
SF:0\xff\xfb\x01Password:\x20")%r(WMSRequest,7,"login:\x20")%r(mydoom,7,"l
SF:ogin:\x20")%r(WWWOFFLEctrlstat,14,"login:\x20\xff\xfb\x01Password:\x20"
SF:)%r(Verifier,14,"login:\x20\xff\xfb\x01Password:\x20")%r(VerifierAdvanc
SF:ed,14,"login:\x20\xff\xfb\x01Password:\x20");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port445-TCP:V=7.92%I=9%D=4/10%Time=69D897D8%P=x86_64-redhat-linux-gnu%r
SF:(SMBProgNeg,51,"\0\0\0M\xffSMBr\0\0\0\0\x80\0\xc0\0\0\0\0\0\0\0\0\0\0\0
SF:\0\0\0@\x06\0\0\x01\0\x11\x07\0\x03\x01\0\x01\0\0\xfa\0\0\0\0\x01\0\0\0
SF:\0\0p\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\x08\0\x11\"3DUfw\x88");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port1433-TCP:V=7.92%I=9%D=4/10%Time=69D897D8%P=x86_64-redhat-linux-gnu%
SF:r(ms-sql-s,2F,"\x04\x01\0/\0\0\x01\0\0\0\x1a\0\x06\x01\0\x20\0\x01\x02\
SF:0!\0\x01\x03\0\"\0\x04\x04\0&\0\x01\xff\x0e\0\x07\xd0\0\0\x02\0\0\0\0\0
SF:\0");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port5060-TCP:V=7.92%I=9%D=4/10%Time=69D897E0%P=x86_64-redhat-linux-gnu%
SF:r(SIPOptions,F7,"SIP/2\.0\x20401\x20Unauthorized\r\nVia:\x20SIP/2\.0/TC
SF:P\x20nm;branch=foo\r\nFrom:\x20<sip:nm@nm>;tag=root\r\nTo:\x20<sip:nm2@
SF:nm2>\r\nCall-ID:\x2050000\r\nCSeq:\x2042\x20OPTIONS\r\nWWW-Authenticate
SF::\x20Digest\x20realm=\"omega-decky\",\x20nonce=\"7d2aa09cb9bfbac0\",\x2
SF:0algorithm=MD5\r\nContent-Length:\x200\r\n\r\n")%r(HTTPOptions,AE,"SIP/
SF:2\.0\x20401\x20Unauthorized\r\nVia:\x20\r\nFrom:\x20\r\nTo:\x20\r\nCall
SF:-ID:\x20\r\nCSeq:\x20\r\nWWW-Authenticate:\x20Digest\x20realm=\"omega-d
SF:ecky\",\x20nonce=\"fa63b9f8e719d810\",\x20algorithm=MD5\r\nContent-Leng
SF:th:\x200\r\n\r\n")%r(RTSPRequest,AE,"SIP/2\.0\x20401\x20Unauthorized\r\
SF:nVia:\x20\r\nFrom:\x20\r\nTo:\x20\r\nCall-ID:\x20\r\nCSeq:\x20\r\nWWW-A
SF:uthenticate:\x20Digest\x20realm=\"omega-decky\",\x20nonce=\"25b193b6f8c
SF:63e9d\",\x20algorithm=MD5\r\nContent-Length:\x200\r\n\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port5432-TCP:V=7.92%I=9%D=4/10%Time=69D897E2%P=x86_64-redhat-linux-gnu%
SF:r(SMBProgNeg,D,"R\0\0\0\x0c\0\0\0\x05\x96\xbci&")%r(Kerberos,D,"R\0\0\0
SF:\x0c\0\0\0\x05\xa7\x87:~")%r(ZendJavaBridge,D,"R\0\0\0\x0c\0\0\0\x05\xe
SF:d\x9f\xf8\0");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port6379-TCP:V=7.92%I=9%D=4/10%Time=69D897D8%P=x86_64-redhat-linux-gnu%
SF:r(redis-server,9E,"\$150\r\n#\x20Server\nredis_version:7\.2\.7\nredis_m
SF:ode:standalone\nos:Linux\x205\.15\.0\narch_bits:64\ntcp_port:6379\nupti
SF:me_in_seconds:864000\nconnected_clients:1\n#\x20Keyspace\n\r\n")%r(GetR
SF:equest,5,"\$-1\r\n")%r(HTTPOptions,16,"-ERR\x20unknown\x20command\r\n")
SF:%r(RTSPRequest,16,"-ERR\x20unknown\x20command\r\n")%r(Hello,16,"-ERR\x2
SF:0unknown\x20command\r\n")%r(Help,16,"-ERR\x20unknown\x20command\r\n")%r
SF:(SSLSessionReq,16,"-ERR\x20unknown\x20command\r\n")%r(TerminalServerCoo
SF:kie,16,"-ERR\x20unknown\x20command\r\n")%r(TLSSessionReq,16,"-ERR\x20un
SF:known\x20command\r\n")%r(SSLv23SessionReq,16,"-ERR\x20unknown\x20comman
SF:d\r\n")%r(Kerberos,16,"-ERR\x20unknown\x20command\r\n")%r(FourOhFourReq
SF:uest,5,"\$-1\r\n")%r(LPDString,16,"-ERR\x20unknown\x20command\r\n")%r(L
SF:DAPSearchReq,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20comma
SF:nd\r\n")%r(SIPOptions,DC,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown
SF:\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command
SF:\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x2
SF:0unknown\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x2
SF:0command\r\n-ERR\x20unknown\x20command\r\n")%r(NessusTPv12,16,"-ERR\x20
SF:unknown\x20command\r\n")%r(NessusTPv11,16,"-ERR\x20unknown\x20command\r
SF:\n")%r(NessusTPv10,16,"-ERR\x20unknown\x20command\r\n")%r(WWWOFFLEctrls
SF:tat,16,"-ERR\x20unknown\x20command\r\n")%r(Verifier,16,"-ERR\x20unknown
SF:\x20command\r\n")%r(VerifierAdvanced,16,"-ERR\x20unknown\x20command\r\n
SF:")%r(Socks5,16,"-ERR\x20unknown\x20command\r\n")%r(OfficeScan,5,"\$-1\r
SF:\n")%r(HELP4STOMP,16,"-ERR\x20unknown\x20command\r\n")%r(Memcache,16,"-
SF:ERR\x20unknown\x20command\r\n")%r(firebird,16,"-ERR\x20unknown\x20comma
SF:nd\r\n")%r(pervasive-btrieve,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20
SF:unknown\x20command\r\n")%r(ajp,16,"-ERR\x20unknown\x20command\r\n")%r(h
SF:p-pjl,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command\r\n"
SF:)%r(SqueezeCenter_CLI,16,"-ERR\x20unknown\x20command\r\n")%r(dominocons
SF:ole,16,"-ERR\x20unknown\x20command\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port6443-TCP:V=7.92%I=9%D=4/10%Time=69D897D3%P=x86_64-redhat-linux-gnu%
SF:r(SSLSessionReq,1E8,"<!DOCTYPE\x20HTML>\n<html\x20lang=\"en\">\n\x20\x2
SF:0\x20\x20<head>\n\x20\x20\x20\x20\x20\x20\x20\x20<meta\x20charset=\"utf
SF:-8\">\n\x20\x20\x20\x20\x20\x20\x20\x20<title>Error\x20response</title>
SF:\n\x20\x20\x20\x20</head>\n\x20\x20\x20\x20<body>\n\x20\x20\x20\x20\x20
SF:\x20\x20\x20<h1>Error\x20response</h1>\n\x20\x20\x20\x20\x20\x20\x20\x2
SF:0<p>Error\x20code:\x20400</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Messa
SF:ge:\x20Bad\x20request\x20syntax\x20\('\\x16\\x03\\x00\\x00S\\x01\\x00\\
SF:x00O\\x03\\x00\?G\xc3\x97\xc3\xb7\xc2\xba,\xc3\xae\xc3\xaa\xc2\xb2`~\xc
SF:3\xb3\\x00\xc3\xbd\\x82{\xc2\xb9\xc3\x95\\x96\xc3\x88w\\x9b\xc3\xa6\xc3
SF:\x84\xc3\x9b&lt;=\xc3\x9bo\xc3\xaf\\x10n\\x00\\x00\(\\x00\\x16\\x00\\x1
SF:3\\x00'\)\.</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20code\x20ex
SF:planation:\x20400\x20-\x20Bad\x20request\x20syntax\x20or\x20unsupported
SF:\x20method\.</p>\n\x20\x20\x20\x20</body>\n</html>\n")%r(GetRequest,E0,
SF:"HTTP/1\.1\x20404\x20NOT\x20FOUND\r\nServer:\x20Werkzeug/3\.1\.8\x20Pyt
SF:hon/3\.11\.2\r\nDate:\x20Fri,\x2010\x20Apr\x202026\x2006:25:23\x20GMT\r
SF:\nContent-Type:\x20application/json\r\nContent-Length:\x2052\r\nConnect
SF:ion:\x20close\r\n\r\n{\"kind\":\x20\"Status\",\x20\"status\":\x20\"Fail
SF:ure\",\x20\"code\":\x20404}")%r(HTTPOptions,C7,"HTTP/1\.1\x20200\x20OK\
SF:r\nServer:\x20Werkzeug/3\.1\.8\x20Python/3\.11\.2\r\nDate:\x20Fri,\x201
SF:0\x20Apr\x202026\x2006:25:23\x20GMT\r\nContent-Type:\x20text/html;\x20c
SF:harset=utf-8\r\nAllow:\x20GET,\x20HEAD,\x20OPTIONS\r\nContent-Length:\x
SF:200\r\nConnection:\x20close\r\n\r\n")%r(RTSPRequest,16C,"<!DOCTYPE\x20H
SF:TML>\n<html\x20lang=\"en\">\n\x20\x20\x20\x20<head>\n\x20\x20\x20\x20\x
SF:20\x20\x20\x20<meta\x20charset=\"utf-8\">\n\x20\x20\x20\x20\x20\x20\x20
SF:\x20<title>Error\x20response</title>\n\x20\x20\x20\x20</head>\n\x20\x20
SF:\x20\x20<body>\n\x20\x20\x20\x20\x20\x20\x20\x20<h1>Error\x20response</
SF:h1>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20code:\x20400</p>\n\x20
SF:\x20\x20\x20\x20\x20\x20\x20<p>Message:\x20Bad\x20request\x20version\x2
SF:0\('RTSP/1\.0'\)\.</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20cod
SF:e\x20explanation:\x20400\x20-\x20Bad\x20request\x20syntax\x20or\x20unsu
SF:pported\x20method\.</p>\n\x20\x20\x20\x20</body>\n</html>\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port9200-TCP:V=7.92%I=9%D=4/10%Time=69D897D3%P=x86_64-redhat-linux-gnu%
SF:r(GetRequest,293,"HTTP/1\.0\x20200\x20OK\r\nServer:\x20elasticsearch\x2
SF:0\r\nDate:\x20Fri,\x2010\x20Apr\x202026\x2006:25:23\x20GMT\r\nContent-T
SF:ype:\x20application/json;\x20charset=UTF-8\r\nContent-Length:\x20477\r\
SF:nX-elastic-product:\x20Elasticsearch\r\n\r\n{\"name\":\x20\"omega-decky
SF:\",\x20\"cluster_name\":\x20\"elasticsearch\",\x20\"cluster_uuid\":\x20
SF:\"xC3Pr9abTq2mNkOeLvXwYA\",\x20\"version\":\x20{\"number\":\x20\"7\.17\
SF:.9\",\x20\"build_flavor\":\x20\"default\",\x20\"build_type\":\x20\"dock
SF:er\",\x20\"build_hash\":\x20\"ef48222227ee6b9e70e502f0f0daa52435ee634d\
SF:",\x20\"build_date\":\x20\"2023-01-31T05:34:43\.305517834Z\",\x20\"buil
SF:d_snapshot\":\x20false,\x20\"lucene_version\":\x20\"8\.11\.1\",\x20\"mi
SF:nimum_wire_compatibility_version\":\x20\"6\.8\.0\",\x20\"minimum_index_
SF:compatibility_version\":\x20\"6\.0\.0-beta1\"},\x20\"tagline\":\x20\"Yo
SF:u\x20Know,\x20for\x20Search\"}")%r(HTTPOptions,223,"HTTP/1\.0\x20501\x2
SF:0Unsupported\x20method\x20\('OPTIONS'\)\r\nServer:\x20elasticsearch\x20
SF:\r\nDate:\x20Fri,\x2010\x20Apr\x202026\x2006:25:23\x20GMT\r\nConnection
SF::\x20close\r\nContent-Type:\x20text/html;charset=utf-8\r\nContent-Lengt
SF:h:\x20360\r\n\r\n<!DOCTYPE\x20HTML>\n<html\x20lang=\"en\">\n\x20\x20\x2
SF:0\x20<head>\n\x20\x20\x20\x20\x20\x20\x20\x20<meta\x20charset=\"utf-8\"
SF:>\n\x20\x20\x20\x20\x20\x20\x20\x20<title>Error\x20response</title>\n\x
SF:20\x20\x20\x20</head>\n\x20\x20\x20\x20<body>\n\x20\x20\x20\x20\x20\x20
SF:\x20\x20<h1>Error\x20response</h1>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>
SF:Error\x20code:\x20501</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Message:\
SF:x20Unsupported\x20method\x20\('OPTIONS'\)\.</p>\n\x20\x20\x20\x20\x20\x
SF:20\x20\x20<p>Error\x20code\x20explanation:\x20501\x20-\x20Server\x20doe
SF:s\x20not\x20support\x20this\x20operation\.</p>\n\x20\x20\x20\x20</body>
SF:\n</html>\n")%r(RTSPRequest,16C,"<!DOCTYPE\x20HTML>\n<html\x20lang=\"en
SF:\">\n\x20\x20\x20\x20<head>\n\x20\x20\x20\x20\x20\x20\x20\x20<meta\x20c
SF:harset=\"utf-8\">\n\x20\x20\x20\x20\x20\x20\x20\x20<title>Error\x20resp
SF:onse</title>\n\x20\x20\x20\x20</head>\n\x20\x20\x20\x20<body>\n\x20\x20
SF:\x20\x20\x20\x20\x20\x20<h1>Error\x20response</h1>\n\x20\x20\x20\x20\x2
SF:0\x20\x20\x20<p>Error\x20code:\x20400</p>\n\x20\x20\x20\x20\x20\x20\x20
SF:\x20<p>Message:\x20Bad\x20request\x20version\x20\('RTSP/1\.0'\)\.</p>\n
SF:\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20code\x20explanation:\x20400
SF:\x20-\x20Bad\x20request\x20syntax\x20or\x20unsupported\x20method\.</p>\
SF:n\x20\x20\x20\x20</body>\n</html>\n");
MAC Address: F2:5F:2F:EE:5B:96 (Unknown)
Service Info: Hosts: omega-decky, omega-decky
Host script results:
|_ms-sql-info: ERROR: Script execution failed (use -d to debug)
| smb2-time:
| date: 2026-04-10T06:33:53
|_ start_date: 2026-04-10T06:33:53
| smb-security-mode:
| account_used: guest
| authentication_level: user
| challenge_response: supported
|_ message_signing: disabled (dangerous, but default)
| smb2-security-mode:
| 2.0.2:
|_ Message signing enabled but not required
|_clock-skew: mean: -77663d15h16m57s, deviation: 109832d23h14m31s, median: -155327d06h33m54s
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 784.93 seconds

549
postpostfixnmap.txt Normal file
View File

@@ -0,0 +1,549 @@
# Nmap 7.92 scan initiated Sat Apr 11 04:21:11 2026 as: nmap -A -O -p- -sV -sC --version-intensity 9 -oN postpostfixnmap.txt 192.168.1.200,201
Nmap scan report for 192.168.1.200
Host is up (0.000031s latency).
Not shown: 65510 closed tcp ports (reset)
PORT STATE SERVICE VERSION
21/tcp open ftp vsftpd (before 2.0.8) or WU-FTPD
23/tcp open telnet?
| fingerprint-strings:
| DNSStatusRequestTCP, DNSVersionBindReqTCP, DistCCD, JavaRMI, LANDesk-RC, LDAPBindReq, NULL, NotesRPC, RPCCheck, Radmin, TLSSessionReq, TerminalServer, WMSRequest, X11Probe, mydoom, tn3270:
| login:
| FourOhFourRequest, GenericLines, GetRequest, HTTPOptions, LDAPSearchReq, RTSPRequest:
| login:
| Password:
| Login incorrect
| login:
| Hello, Help, Kerberos, LPDString, NessusTPv10, NessusTPv11, NessusTPv12, SSLSessionReq, SSLv23SessionReq, TerminalServerCookie, Verifier, VerifierAdvanced, WWWOFFLEctrlstat:
| login:
| Password:
| SIPOptions:
| login:
| Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
| login: Password:
| Login incorrect
|_ login: Password:
25/tcp open smtp Postfix smtpd
|_smtp-commands: omega-decky, PIPELINING, SIZE 10240000, VRFY, ETRN, AUTH PLAIN LOGIN, ENHANCEDSTATUSCODES, 8BITMIME, DSN
80/tcp open http Apache httpd 2.4.54
|_http-server-header: Werkzeug/3.1.8 Python/3.11.2
|_http-title: 403 Forbidden
110/tcp open pop3
|_pop3-capabilities: TOP AUTH-RESP-CODE SASL RESP-CODES UIDL USER
| fingerprint-strings:
| DNSStatusRequestTCP, DNSVersionBindReqTCP, GenericLines, NULL, RPCCheck, SMBProgNeg, X11Probe:
| +OK omega-decky Dovecot POP3 ready.
| FourOhFourRequest, GetRequest, HTTPOptions, Hello, Help, Kerberos, LPDString, RTSPRequest, SSLSessionReq, SSLv23SessionReq, TLSSessionReq, TerminalServerCookie:
| +OK omega-decky Dovecot POP3 ready.
| -ERR Command not recognized
| LDAPSearchReq:
| +OK omega-decky Dovecot POP3 ready.
| -ERR Command not recognized
|_ -ERR Command not recognized
143/tcp open imap Dovecot imapd
|_imap-capabilities: ENABLE LOGIN-REFERRALS ID completed SASL-IR CAPABILITY AUTH=PLAIN AUTH=LOGINA0001 IDLE OK LITERAL+ IMAP4rev1
389/tcp open ldap Cisco LDAP server
445/tcp open microsoft-ds
| fingerprint-strings:
| SMBProgNeg:
| SMBr
|_ "3DUfw
502/tcp open mbap?
1433/tcp open ms-sql-s?
1883/tcp open mqtt
| mqtt-subscribe:
| Topics and their most recent payloads:
| plant/alarm/pump_fault: 0
| plant/water/tank1/pressure: 2.65
| plant/alarm/high_pressure: 0
| plant/$SYS/broker/version: Mosquitto 2.0.15
| plant/alarm/low_chlorine: 0
| plant/water/valve/inlet/state: OPEN
| plant/water/chlorine/residual: 0.7
| plant/water/pump1/status: RUNNING
| plant/water/pump2/status: STANDBY
| plant/water/valve/drain/state: CLOSED
| plant/water/pump1/rpm: 1432
| plant/water/tank1/level: 77.9
| plant/water/chlorine/dosing: 1.2
|_ plant/$SYS/broker/uptime: 2847392
2121/tcp open ccproxy-ftp?
| fingerprint-strings:
| GenericLines:
| 200 FTP server ready.
| Command '
| understood
| NULL:
|_ 200 FTP server ready.
2375/tcp open docker Docker 24.0.5
| fingerprint-strings:
| GetRequest:
| HTTP/1.1 404 NOT FOUND
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: application/json
| Content-Length: 46
| Connection: close
| {"message": "page not found", "response": 404}
| HTTPOptions:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: text/html; charset=utf-8
| Allow: HEAD, GET, OPTIONS
| Content-Length: 0
| Connection: close
| Hello:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request syntax ('EHLO').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| docker:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: application/json
| Content-Length: 187
| Connection: close
|_ {"Version": "24.0.5", "ApiVersion": "1.43", "MinAPIVersion": "1.12", "GitCommit": "ced0996", "GoVersion": "go1.20.6", "Os": "linux", "Arch": "amd64", "KernelVersion": "5.15.0-76-generic"}
| docker-version:
| GitCommit: ced0996
| GoVersion: go1.20.6
| KernelVersion: 5.15.0-76-generic
| Version: 24.0.5
| Arch: amd64
| MinAPIVersion: 1.12
| ApiVersion: 1.43
|_ Os: linux
3306/tcp open mysql MySQL 5.7.38-log
| mysql-info:
| Protocol: 10
| Version: 5.7.38-log
| Thread ID: 1
| Capabilities flags: 63487
| Some Capabilities: LongPassword, LongColumnFlag, IgnoreSpaceBeforeParenthesis, SupportsLoadDataLocal, InteractiveClient, Speaks41ProtocolOld, SupportsCompression, Speaks41ProtocolNew, IgnoreSigpipes, DontAllowDatabaseTableColumn, SupportsTransactions, Support41Auth, ODBCClient, ConnectWithDatabase, FoundRows, SupportsAuthPlugins, SupportsMultipleStatments, SupportsMultipleResults
| Status: Autocommit
| Salt: pv!magic!O}%>UM|gu^1
|_ Auth Plugin Name: mysql_native_password
3389/tcp open ms-wbt-server xrdp
5060/tcp open sip (SIP end point; Status: 401 Unauthorized)
| fingerprint-strings:
| HTTPOptions:
| SIP/2.0 401 Unauthorized
| Via:
| From:
| Call-ID:
| CSeq:
| WWW-Authenticate: Digest realm="omega-decky", nonce="39b4807e4f2565a7", algorithm=MD5
| Content-Length: 0
| RTSPRequest:
| SIP/2.0 401 Unauthorized
| Via:
| From:
| Call-ID:
| CSeq:
| WWW-Authenticate: Digest realm="omega-decky", nonce="73b517049d1e9586", algorithm=MD5
| Content-Length: 0
| SIPOptions:
| SIP/2.0 401 Unauthorized
| Via: SIP/2.0/TCP nm;branch=foo
| From: <sip:nm@nm>;tag=root
| <sip:nm2@nm2>
| Call-ID: 50000
| CSeq: 42 OPTIONS
| WWW-Authenticate: Digest realm="omega-decky", nonce="4895a904f454dcfb", algorithm=MD5
|_ Content-Length: 0
5432/tcp open postgresql?
5900/tcp open vnc VNC (protocol 3.8)
| vnc-info:
| Protocol version: 3.8
| Security types:
|_ VNC Authentication (2)
6379/tcp open redis?
| fingerprint-strings:
| HELP4STOMP, HTTPOptions, Hello, Help, Kerberos, LPDString, Memcache, NessusTPv10, NessusTPv11, NessusTPv12, RTSPRequest, SSLSessionReq, SSLv23SessionReq, Socks5, SqueezeCenter_CLI, TLSSessionReq, TerminalServerCookie, Verifier, VerifierAdvanced, WWWOFFLEctrlstat, ajp, dominoconsole, firebird:
| -ERR unknown command
| LDAPSearchReq, hp-pjl, pervasive-btrieve:
| -ERR unknown command
| -ERR unknown command
| SIPOptions:
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| -ERR unknown command
| redis-server:
| $150
| Server
| redis_version:7.2.7
| redis_mode:standalone
| os:Linux 5.15.0
| arch_bits:64
| tcp_port:6379
| uptime_in_seconds:864000
| connected_clients:1
|_ Keyspace
6443/tcp open sun-sr-https?
| fingerprint-strings:
| GetRequest:
| HTTP/1.1 404 NOT FOUND
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: application/json
| Content-Length: 52
| Connection: close
| {"kind": "Status", "status": "Failure", "code": 404}
| HTTPOptions:
| HTTP/1.1 200 OK
| Server: Werkzeug/3.1.8 Python/3.11.2
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: text/html; charset=utf-8
| Allow: HEAD, GET, OPTIONS
| Content-Length: 0
| Connection: close
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
| </html>
| SSLSessionReq:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request syntax ('
| &lt;=
| ').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
|_ </html>
8800/tcp open sunwebadmin?
| fingerprint-strings:
| GetRequest:
| HTTP/1.1 302 Found
| Date: Sat, 11 Apr 2026 08:17:44 GMT
| Content-Type: text/html
| Location: /index.html
| Content-Length: 0
| HTTPOptions:
| HTTP/1.1 200 OK
| Date: Sat, 11 Apr 2026 08:17:44 GMT
| Allow: GET,HEAD,POST,OPTIONS,TRACE
| Content-Length: 0
| Connection: close
|_ Content-Type: text/html
9200/tcp open wap-wsp?
| fingerprint-strings:
| GetRequest:
| HTTP/1.0 200 OK
| Server: elasticsearch
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Content-Type: application/json; charset=UTF-8
| Content-Length: 477
| X-elastic-product: Elasticsearch
| {"name": "omega-decky", "cluster_name": "elasticsearch", "cluster_uuid": "xC3Pr9abTq2mNkOeLvXwYA", "version": {"number": "7.17.9", "build_flavor": "default", "build_type": "docker", "build_hash": "ef48222227ee6b9e70e502f0f0daa52435ee634d", "build_date": "2023-01-31T05:34:43.305517834Z", "build_snapshot": false, "lucene_version": "8.11.1", "minimum_wire_compatibility_version": "6.8.0", "minimum_index_compatibility_version": "6.0.0-beta1"}, "tagline": "You Know, for Search"}
| HTTPOptions:
| HTTP/1.0 501 Unsupported method ('OPTIONS')
| Server: elasticsearch
| Date: Sat, 11 Apr 2026 08:21:18 GMT
| Connection: close
| Content-Type: text/html;charset=utf-8
| Content-Length: 360
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 501</p>
| <p>Message: Unsupported method ('OPTIONS').</p>
| <p>Error code explanation: 501 - Server does not support this operation.</p>
| </body>
| </html>
| RTSPRequest:
| <!DOCTYPE HTML>
| <html lang="en">
| <head>
| <meta charset="utf-8">
| <title>Error response</title>
| </head>
| <body>
| <h1>Error response</h1>
| <p>Error code: 400</p>
| <p>Message: Bad request version ('RTSP/1.0').</p>
| <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
| </body>
|_ </html>
10201/tcp open rsms?
27017/tcp open mongod?
|_mongodb-info: ERROR: Script execution failed (use -d to debug)
|_mongodb-databases: ERROR: Script execution failed (use -d to debug)
44818/tcp open EtherNetIP-2?
9 services unrecognized despite returning data. If you know the service/version, please submit the following fingerprints at https://nmap.org/cgi-bin/submit.cgi?new-service :
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port23-TCP:V=7.92%I=9%D=4/11%Time=69DA047E%P=x86_64-redhat-linux-gnu%r(
SF:NULL,7,"login:\x20")%r(GenericLines,2C,"login:\x20\xff\xfb\x01Password:
SF:\x20\nLogin\x20incorrect\nlogin:\x20")%r(tn3270,16,"login:\x20\xff\xfe\
SF:x18\xff\xfe\x19\xff\xfc\x19\xff\xfe\0\xff\xfc\0")%r(GetRequest,2C,"logi
SF:n:\x20\xff\xfb\x01Password:\x20\nLogin\x20incorrect\nlogin:\x20")%r(HTT
SF:POptions,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogin\x20incorrect\nl
SF:ogin:\x20")%r(RTSPRequest,2C,"login:\x20\xff\xfb\x01Password:\x20\nLogi
SF:n\x20incorrect\nlogin:\x20")%r(RPCCheck,7,"login:\x20")%r(DNSVersionBin
SF:dReqTCP,7,"login:\x20")%r(DNSStatusRequestTCP,7,"login:\x20")%r(Hello,1
SF:4,"login:\x20\xff\xfb\x01Password:\x20")%r(Help,14,"login:\x20\xff\xfb\
SF:x01Password:\x20")%r(SSLSessionReq,14,"login:\x20\xff\xfb\x01Password:\
SF:x20")%r(TerminalServerCookie,14,"login:\x20\xff\xfb\x01Password:\x20")%
SF:r(TLSSessionReq,7,"login:\x20")%r(SSLv23SessionReq,14,"login:\x20\xff\x
SF:fb\x01Password:\x20")%r(Kerberos,14,"login:\x20\xff\xfb\x01Password:\x2
SF:0")%r(X11Probe,7,"login:\x20")%r(FourOhFourRequest,2C,"login:\x20\xff\x
SF:fb\x01Password:\x20\nLogin\x20incorrect\nlogin:\x20")%r(LPDString,14,"l
SF:ogin:\x20\xff\xfb\x01Password:\x20")%r(LDAPSearchReq,2C,"login:\x20\xff
SF:\xfb\x01Password:\x20\nLogin\x20incorrect\nlogin:\x20")%r(LDAPBindReq,7
SF:,"login:\x20")%r(SIPOptions,BE,"login:\x20\xff\xfb\x01Password:\x20\nLo
SF:gin\x20incorrect\nlogin:\x20Password:\x20\nLogin\x20incorrect\nlogin:\x
SF:20Password:\x20\nLogin\x20incorrect\nlogin:\x20Password:\x20\nLogin\x20
SF:incorrect\nlogin:\x20Password:\x20\nLogin\x20incorrect\nlogin:\x20Passw
SF:ord:\x20")%r(LANDesk-RC,7,"login:\x20")%r(TerminalServer,7,"login:\x20"
SF:)%r(NotesRPC,7,"login:\x20")%r(DistCCD,7,"login:\x20")%r(JavaRMI,7,"log
SF:in:\x20")%r(Radmin,7,"login:\x20")%r(NessusTPv12,14,"login:\x20\xff\xfb
SF:\x01Password:\x20")%r(NessusTPv11,14,"login:\x20\xff\xfb\x01Password:\x
SF:20")%r(NessusTPv10,14,"login:\x20\xff\xfb\x01Password:\x20")%r(WMSReque
SF:st,7,"login:\x20")%r(mydoom,7,"login:\x20")%r(WWWOFFLEctrlstat,14,"logi
SF:n:\x20\xff\xfb\x01Password:\x20")%r(Verifier,14,"login:\x20\xff\xfb\x01
SF:Password:\x20")%r(VerifierAdvanced,14,"login:\x20\xff\xfb\x01Password:\
SF:x20");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port110-TCP:V=7.92%I=9%D=4/11%Time=69DA047E%P=x86_64-redhat-linux-gnu%r
SF:(NULL,25,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n")%r(Gen
SF:ericLines,25,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n")%r
SF:(GetRequest,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n-E
SF:RR\x20Command\x20not\x20recognized\r\n")%r(HTTPOptions,42,"\+OK\x20omeg
SF:a-decky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\x20not\x20reco
SF:gnized\r\n")%r(RTSPRequest,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x
SF:20ready\.\r\n-ERR\x20Command\x20not\x20recognized\r\n")%r(RPCCheck,25,"
SF:\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n")%r(DNSVersionBin
SF:dReqTCP,25,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n")%r(D
SF:NSStatusRequestTCP,25,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\
SF:.\r\n")%r(Hello,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r
SF:\n-ERR\x20Command\x20not\x20recognized\r\n")%r(Help,42,"\+OK\x20omega-d
SF:ecky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\x20not\x20recogni
SF:zed\r\n")%r(SSLSessionReq,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x2
SF:0ready\.\r\n-ERR\x20Command\x20not\x20recognized\r\n")%r(TerminalServer
SF:Cookie,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x2
SF:0Command\x20not\x20recognized\r\n")%r(TLSSessionReq,42,"\+OK\x20omega-d
SF:ecky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\x20not\x20recogni
SF:zed\r\n")%r(SSLv23SessionReq,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3
SF:\x20ready\.\r\n-ERR\x20Command\x20not\x20recognized\r\n")%r(Kerberos,42
SF:,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\
SF:x20not\x20recognized\r\n")%r(SMBProgNeg,25,"\+OK\x20omega-decky\x20Dove
SF:cot\x20POP3\x20ready\.\r\n")%r(X11Probe,25,"\+OK\x20omega-decky\x20Dove
SF:cot\x20POP3\x20ready\.\r\n")%r(FourOhFourRequest,42,"\+OK\x20omega-deck
SF:y\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\x20not\x20recognized
SF:\r\n")%r(LPDString,42,"\+OK\x20omega-decky\x20Dovecot\x20POP3\x20ready\
SF:.\r\n-ERR\x20Command\x20not\x20recognized\r\n")%r(LDAPSearchReq,5F,"\+O
SF:K\x20omega-decky\x20Dovecot\x20POP3\x20ready\.\r\n-ERR\x20Command\x20no
SF:t\x20recognized\r\n-ERR\x20Command\x20not\x20recognized\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port445-TCP:V=7.92%I=9%D=4/11%Time=69DA0483%P=x86_64-redhat-linux-gnu%r
SF:(SMBProgNeg,51,"\0\0\0M\xffSMBr\0\0\0\0\x80\0\xc0\0\0\0\0\0\0\0\0\0\0\0
SF:\0\0\0@\x06\0\0\x01\0\x11\x07\0\x03\x01\0\x01\0\0\xfa\0\0\0\0\x01\0\0\0
SF:\0\0p\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\x08\0\x11\"3DUfw\x88");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port1433-TCP:V=7.92%I=9%D=4/11%Time=69DA0483%P=x86_64-redhat-linux-gnu%
SF:r(ms-sql-s,2F,"\x04\x01\0/\0\0\x01\0\0\0\x1a\0\x06\x01\0\x20\0\x01\x02\
SF:0!\0\x01\x03\0\"\0\x04\x04\0&\0\x01\xff\x0e\0\x07\xd0\0\0\x02\0\0\0\0\0
SF:\0");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port2121-TCP:V=7.92%I=9%D=4/11%Time=69DA047E%P=x86_64-redhat-linux-gnu%
SF:r(NULL,17,"200\x20FTP\x20server\x20ready\.\r\n")%r(GenericLines,3A,"200
SF:\x20FTP\x20server\x20ready\.\r\n500\x20Command\x20'\\r\\n'\x20not\x20un
SF:derstood\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port5060-TCP:V=7.92%I=9%D=4/11%Time=69DA048A%P=x86_64-redhat-linux-gnu%
SF:r(SIPOptions,F7,"SIP/2\.0\x20401\x20Unauthorized\r\nVia:\x20SIP/2\.0/TC
SF:P\x20nm;branch=foo\r\nFrom:\x20<sip:nm@nm>;tag=root\r\nTo:\x20<sip:nm2@
SF:nm2>\r\nCall-ID:\x2050000\r\nCSeq:\x2042\x20OPTIONS\r\nWWW-Authenticate
SF::\x20Digest\x20realm=\"omega-decky\",\x20nonce=\"4895a904f454dcfb\",\x2
SF:0algorithm=MD5\r\nContent-Length:\x200\r\n\r\n")%r(HTTPOptions,AE,"SIP/
SF:2\.0\x20401\x20Unauthorized\r\nVia:\x20\r\nFrom:\x20\r\nTo:\x20\r\nCall
SF:-ID:\x20\r\nCSeq:\x20\r\nWWW-Authenticate:\x20Digest\x20realm=\"omega-d
SF:ecky\",\x20nonce=\"39b4807e4f2565a7\",\x20algorithm=MD5\r\nContent-Leng
SF:th:\x200\r\n\r\n")%r(RTSPRequest,AE,"SIP/2\.0\x20401\x20Unauthorized\r\
SF:nVia:\x20\r\nFrom:\x20\r\nTo:\x20\r\nCall-ID:\x20\r\nCSeq:\x20\r\nWWW-A
SF:uthenticate:\x20Digest\x20realm=\"omega-decky\",\x20nonce=\"73b517049d1
SF:e9586\",\x20algorithm=MD5\r\nContent-Length:\x200\r\n\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port5432-TCP:V=7.92%I=9%D=4/11%Time=69DA048D%P=x86_64-redhat-linux-gnu%
SF:r(SMBProgNeg,D,"R\0\0\0\x0c\0\0\0\x059=\xdb\x16")%r(Kerberos,D,"R\0\0\0
SF:\x0c\0\0\0\x05\xae>;\xd5")%r(ZendJavaBridge,D,"R\0\0\0\x0c\0\0\0\x05\x8
SF:3l\x7f\x8c");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port6379-TCP:V=7.92%I=9%D=4/11%Time=69DA0483%P=x86_64-redhat-linux-gnu%
SF:r(redis-server,9E,"\$150\r\n#\x20Server\nredis_version:7\.2\.7\nredis_m
SF:ode:standalone\nos:Linux\x205\.15\.0\narch_bits:64\ntcp_port:6379\nupti
SF:me_in_seconds:864000\nconnected_clients:1\n#\x20Keyspace\n\r\n")%r(GetR
SF:equest,5,"\$-1\r\n")%r(HTTPOptions,16,"-ERR\x20unknown\x20command\r\n")
SF:%r(RTSPRequest,16,"-ERR\x20unknown\x20command\r\n")%r(Hello,16,"-ERR\x2
SF:0unknown\x20command\r\n")%r(Help,16,"-ERR\x20unknown\x20command\r\n")%r
SF:(SSLSessionReq,16,"-ERR\x20unknown\x20command\r\n")%r(TerminalServerCoo
SF:kie,16,"-ERR\x20unknown\x20command\r\n")%r(TLSSessionReq,16,"-ERR\x20un
SF:known\x20command\r\n")%r(SSLv23SessionReq,16,"-ERR\x20unknown\x20comman
SF:d\r\n")%r(Kerberos,16,"-ERR\x20unknown\x20command\r\n")%r(FourOhFourReq
SF:uest,5,"\$-1\r\n")%r(LPDString,16,"-ERR\x20unknown\x20command\r\n")%r(L
SF:DAPSearchReq,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20comma
SF:nd\r\n")%r(SIPOptions,DC,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown
SF:\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command
SF:\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x2
SF:0unknown\x20command\r\n-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x2
SF:0command\r\n-ERR\x20unknown\x20command\r\n")%r(NessusTPv12,16,"-ERR\x20
SF:unknown\x20command\r\n")%r(NessusTPv11,16,"-ERR\x20unknown\x20command\r
SF:\n")%r(NessusTPv10,16,"-ERR\x20unknown\x20command\r\n")%r(WWWOFFLEctrls
SF:tat,16,"-ERR\x20unknown\x20command\r\n")%r(Verifier,16,"-ERR\x20unknown
SF:\x20command\r\n")%r(VerifierAdvanced,16,"-ERR\x20unknown\x20command\r\n
SF:")%r(Socks5,16,"-ERR\x20unknown\x20command\r\n")%r(OfficeScan,5,"\$-1\r
SF:\n")%r(HELP4STOMP,16,"-ERR\x20unknown\x20command\r\n")%r(Memcache,16,"-
SF:ERR\x20unknown\x20command\r\n")%r(firebird,16,"-ERR\x20unknown\x20comma
SF:nd\r\n")%r(pervasive-btrieve,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20
SF:unknown\x20command\r\n")%r(ajp,16,"-ERR\x20unknown\x20command\r\n")%r(h
SF:p-pjl,2C,"-ERR\x20unknown\x20command\r\n-ERR\x20unknown\x20command\r\n"
SF:)%r(SqueezeCenter_CLI,16,"-ERR\x20unknown\x20command\r\n")%r(dominocons
SF:ole,16,"-ERR\x20unknown\x20command\r\n");
==============NEXT SERVICE FINGERPRINT (SUBMIT INDIVIDUALLY)==============
SF-Port6443-TCP:V=7.92%I=9%D=4/11%Time=69DA047E%P=x86_64-redhat-linux-gnu%
SF:r(SSLSessionReq,1E8,"<!DOCTYPE\x20HTML>\n<html\x20lang=\"en\">\n\x20\x2
SF:0\x20\x20<head>\n\x20\x20\x20\x20\x20\x20\x20\x20<meta\x20charset=\"utf
SF:-8\">\n\x20\x20\x20\x20\x20\x20\x20\x20<title>Error\x20response</title>
SF:\n\x20\x20\x20\x20</head>\n\x20\x20\x20\x20<body>\n\x20\x20\x20\x20\x20
SF:\x20\x20\x20<h1>Error\x20response</h1>\n\x20\x20\x20\x20\x20\x20\x20\x2
SF:0<p>Error\x20code:\x20400</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Messa
SF:ge:\x20Bad\x20request\x20syntax\x20\('\\x16\\x03\\x00\\x00S\\x01\\x00\\
SF:x00O\\x03\\x00\?G\xc3\x97\xc3\xb7\xc2\xba,\xc3\xae\xc3\xaa\xc2\xb2`~\xc
SF:3\xb3\\x00\xc3\xbd\\x82{\xc2\xb9\xc3\x95\\x96\xc3\x88w\\x9b\xc3\xa6\xc3
SF:\x84\xc3\x9b&lt;=\xc3\x9bo\xc3\xaf\\x10n\\x00\\x00\(\\x00\\x16\\x00\\x1
SF:3\\x00'\)\.</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20code\x20ex
SF:planation:\x20400\x20-\x20Bad\x20request\x20syntax\x20or\x20unsupported
SF:\x20method\.</p>\n\x20\x20\x20\x20</body>\n</html>\n")%r(GetRequest,E0,
SF:"HTTP/1\.1\x20404\x20NOT\x20FOUND\r\nServer:\x20Werkzeug/3\.1\.8\x20Pyt
SF:hon/3\.11\.2\r\nDate:\x20Sat,\x2011\x20Apr\x202026\x2008:21:18\x20GMT\r
SF:\nContent-Type:\x20application/json\r\nContent-Length:\x2052\r\nConnect
SF:ion:\x20close\r\n\r\n{\"kind\":\x20\"Status\",\x20\"status\":\x20\"Fail
SF:ure\",\x20\"code\":\x20404}")%r(HTTPOptions,C7,"HTTP/1\.1\x20200\x20OK\
SF:r\nServer:\x20Werkzeug/3\.1\.8\x20Python/3\.11\.2\r\nDate:\x20Sat,\x201
SF:1\x20Apr\x202026\x2008:21:18\x20GMT\r\nContent-Type:\x20text/html;\x20c
SF:harset=utf-8\r\nAllow:\x20HEAD,\x20GET,\x20OPTIONS\r\nContent-Length:\x
SF:200\r\nConnection:\x20close\r\n\r\n")%r(RTSPRequest,16C,"<!DOCTYPE\x20H
SF:TML>\n<html\x20lang=\"en\">\n\x20\x20\x20\x20<head>\n\x20\x20\x20\x20\x
SF:20\x20\x20\x20<meta\x20charset=\"utf-8\">\n\x20\x20\x20\x20\x20\x20\x20
SF:\x20<title>Error\x20response</title>\n\x20\x20\x20\x20</head>\n\x20\x20
SF:\x20\x20<body>\n\x20\x20\x20\x20\x20\x20\x20\x20<h1>Error\x20response</
SF:h1>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20code:\x20400</p>\n\x20
SF:\x20\x20\x20\x20\x20\x20\x20<p>Message:\x20Bad\x20request\x20version\x2
SF:0\('RTSP/1\.0'\)\.</p>\n\x20\x20\x20\x20\x20\x20\x20\x20<p>Error\x20cod
SF:e\x20explanation:\x20400\x20-\x20Bad\x20request\x20syntax\x20or\x20unsu
SF:pported\x20method\.</p>\n\x20\x20\x20\x20</body>\n</html>\n");
MAC Address: 5A:84:B9:11:A3:E8 (Unknown)
Device type: general purpose
Running: Linux 5.X
OS CPE: cpe:/o:linux:linux_kernel:5
OS details: Linux 5.3 - 5.4
Network Distance: 1 hop
Service Info: Hosts: omega-decky, omega-decky
Host script results:
| smb2-security-mode:
| 2.0.2:
|_ Message signing enabled but not required
|_clock-skew: mean: -77664d04h15m02s, deviation: 109833d17h34m55s, median: -155328d08h30m05s
| smb2-time:
| date: 2026-04-11T08:30:06
|_ start_date: 2026-04-11T08:30:06
| smb-security-mode:
| account_used: guest
| authentication_level: user
| challenge_response: supported
|_ message_signing: disabled (dangerous, but default)
|_ms-sql-info: ERROR: Script execution failed (use -d to debug)
TRACEROUTE
HOP RTT ADDRESS
1 0.03 ms 192.168.1.200
Nmap scan report for 192.168.1.201
Host is up (0.000037s latency).
Not shown: 65534 closed tcp ports (reset)
PORT STATE SERVICE VERSION
25/tcp open smtp Postfix smtpd
|_smtp-commands: relay-decky, PIPELINING, SIZE 10240000, VRFY, ETRN, AUTH PLAIN LOGIN, ENHANCEDSTATUSCODES, 8BITMIME, DSN
MAC Address: 0E:84:8E:09:6A:47 (Unknown)
No exact OS matches for host (If you know what OS is running on it, see https://nmap.org/submit/ ).
TCP/IP fingerprint:
OS:SCAN(V=7.92%E=4%D=4/11%OT=25%CT=1%CU=38325%PV=Y%DS=1%DC=D%G=Y%M=0E848E%T
OS:M=69DA07BC%P=x86_64-redhat-linux-gnu)SEQ(SP=101%GCD=1%ISR=10F%TI=Z%CI=Z%
OS:TS=A)SEQ(SP=101%GCD=1%ISR=10F%TI=Z%CI=Z%II=I%TS=A)OPS(O1=M5B4ST11NWA%O2=
OS:M5B4ST11NWA%O3=M5B4NNT11NWA%O4=M5B4ST11NWA%O5=M5B4ST11NWA%O6=M5B4ST11)WI
OS:N(W1=FE88%W2=FE88%W3=FE88%W4=FE88%W5=FE88%W6=FE88)ECN(R=Y%DF=Y%T=40%W=FA
OS:F0%O=M5B4NNSNWA%CC=Y%Q=)T1(R=Y%DF=Y%T=40%S=O%A=S+%F=AS%RD=0%Q=)T2(R=N)T3
OS:(R=N)T4(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=)T5(R=Y%DF=Y%T=40%W=0%S=
OS:Z%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=)T7(R=
OS:Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R=Y%DF=N%T=40%IPL=164%UN=0%R
OS:IPL=G%RID=G%RIPCK=G%RUCK=G%RUD=G)IE(R=Y%DFI=N%T=40%CD=S)
Network Distance: 1 hop
Service Info: Host: relay-decky
TRACEROUTE
HOP RTT ADDRESS
1 0.04 ms 192.168.1.201
OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Sat Apr 11 04:35:08 2026 -- 2 IP addresses (2 hosts up) scanned in 836.75 seconds

View File

@@ -20,6 +20,7 @@ dependencies = [
"bcrypt>=4.1.0", "bcrypt>=4.1.0",
"psutil>=5.9.0", "psutil>=5.9.0",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",
"sqlmodel>=0.0.16",
] ]
[project.optional-dependencies] [project.optional-dependencies]
@@ -30,11 +31,58 @@ dev = [
"pip-audit>=2.0", "pip-audit>=2.0",
"httpx>=0.27.0", "httpx>=0.27.0",
"hypothesis>=6.0", "hypothesis>=6.0",
"pytest-cov>=7.0",
"pytest-asyncio>=1.0",
"freezegun>=1.5",
"schemathesis>=4.0",
"pytest-xdist>=3.8.0",
]
live = [
"flask>=3.0",
"twisted>=24.0",
"requests>=2.32",
"redis>=5.0",
"pymysql>=1.1",
"psycopg2-binary>=2.9",
"paho-mqtt>=2.0",
"pymongo>=4.0",
] ]
[project.scripts] [project.scripts]
decnet = "decnet.cli:app" decnet = "decnet.cli:app"
[tool.pytest.ini_options]
asyncio_mode = "auto"
addopts = "-m 'not fuzz and not live' -v -q -x -n logical"
markers = [
"fuzz: hypothesis-based fuzz tests (slow, run with -m fuzz or -m '' for all)",
"live: live subprocess service tests (run with -m live)",
"live_docker: live Docker container tests (requires DECNET_LIVE_DOCKER=1)",
]
filterwarnings = [
"ignore::pytest.PytestUnhandledThreadExceptionWarning",
"ignore::DeprecationWarning",
]
[tool.coverage.run]
source = ["decnet"]
omit = ["*/tests/*", "templates/*"]
parallel = true
[tool.coverage.report]
show_missing = true
skip_covered = false
# Run with: pytest --cov --cov-report=term-missing
[tool.setuptools.packages.find] [tool.setuptools.packages.find]
where = ["."] where = ["."]
include = ["decnet*"] include = ["decnet*"]
[tool.bandit]
exclude_dirs = [
"templates/http/decnet_logging.py",
"templates/imap/decnet_logging.py",
"templates/pop3/decnet_logging.py",
"templates/real_ssh/decnet_logging.py",
"templates/smtp/decnet_logging.py",
]

83
requirements.lock Normal file
View File

@@ -0,0 +1,83 @@
aiosqlite==0.22.1
annotated-doc==0.0.4
annotated-types==0.7.0
anyio==4.13.0
attrs==26.1.0
bandit==1.9.4
bcrypt==5.0.0
boolean.py==5.0
CacheControl==0.14.4
certifi==2026.2.25
charset-normalizer==3.4.7
click==8.3.2
cyclonedx-python-lib==11.7.0
defusedxml==0.7.1
docker==7.1.0
execnet==2.1.2
fastapi==0.135.3
filelock==3.25.2
freezegun==1.5.5
graphql-core==3.2.8
greenlet==3.4.0
h11==0.16.0
harfile==0.4.0
httpcore==1.0.9
httpx==0.28.1
hypothesis==6.151.12
hypothesis-graphql==0.12.0
hypothesis-jsonschema==0.23.1
idna==3.11
iniconfig==2.3.0
Jinja2==3.1.6
jsonschema==4.26.0
jsonschema_rs==0.45.1
jsonschema-specifications==2025.9.1
junit-xml==1.9
license-expression==30.4.4
markdown-it-py==4.0.0
MarkupSafe==3.0.3
mdurl==0.1.2
msgpack==1.1.2
packageurl-python==0.17.6
packaging==26.0
pip-api==0.0.34
pip_audit==2.10.0
pip-requirements-parser==32.0.1
platformdirs==4.9.4
pluggy==1.6.0
psutil==7.2.2
pydantic==2.12.5
pydantic_core==2.41.5
Pygments==2.20.0
PyJWT==2.12.1
pyparsing==3.3.2
pyrate-limiter==4.1.0
py-serializable==2.1.0
pytest==9.0.3
pytest-xdist==3.8.0
python-dateutil==2.9.0.post0
python-dotenv==1.2.2
PyYAML==6.0.3
referencing==0.37.0
requests==2.33.1
rich==14.3.3
rpds-py==0.30.0
ruff==0.15.9
schemathesis==4.15.0
shellingham==1.5.4
six==1.17.0
sortedcontainers==2.4.0
SQLAlchemy==2.0.49
sqlmodel==0.0.38
starlette==1.0.0
starlette-testclient==0.4.1
stevedore==5.7.0
tenacity==9.1.4
tomli==2.4.1
tomli_w==1.2.0
typer==0.24.1
typing_extensions==4.15.0
typing-inspection==0.4.2
urllib3==2.6.3
uvicorn==0.44.0
Werkzeug==3.1.8

View File

@@ -0,0 +1,28 @@
ARG BASE_IMAGE=honeynet/conpot:latest
FROM ${BASE_IMAGE}
USER root
# Replace 5020 with 502 in all templates so Modbus binds on the standard port
RUN find /opt /usr /etc /home -name "*.xml" -exec sed -i 's/<port>5020<\/port>/<port>502<\/port>/g' {} + 2>/dev/null || true
RUN find /opt /usr /etc /home -name "*.xml" -exec sed -i 's/port="5020"/port="502"/g' {} + 2>/dev/null || true
# Install libcap and give the Python interpreter permission to bind ports < 1024
RUN (apt-get update && apt-get install -y --no-install-recommends libcap2-bin 2>/dev/null) || (apk add --no-cache libcap 2>/dev/null) || true
RUN find /home/conpot/.local/bin /usr /opt -type f -name 'python*' -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true
# Bridge conpot's own logger into DECNET's RFC 5424 syslog pipeline.
# entrypoint.py is self-contained (inlines the formatter) because the
# conpot base image runs Python 3.6, which cannot import the shared
# decnet_logging.py (that file uses 3.9+ / 3.10+ type syntax).
COPY entrypoint.py /home/conpot/entrypoint.py
RUN chown conpot:conpot /home/conpot/entrypoint.py \
&& chmod +x /home/conpot/entrypoint.py
# The upstream image already runs as non-root 'conpot'.
# We do NOT switch to a 'decnet' user — doing so breaks pkg_resources
# because conpot's eggs live under /home/conpot/.local and are only on
# the Python path for that user.
USER conpot
ENTRYPOINT ["/usr/bin/python3", "/home/conpot/entrypoint.py"]

View File

@@ -0,0 +1,89 @@
#!/usr/bin/env python3
"""
Shared RFC 5424 syslog helper for DECNET service templates.
Services call syslog_line() to format an RFC 5424 message, then
write_syslog_file() to emit it to stdout — Docker captures it, and the
host-side collector streams it into the log file.
RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
Facility: local0 (16), PEN for SD element ID: decnet@55555
"""
from datetime import datetime, timezone
from typing import Any
# ─── Constants ────────────────────────────────────────────────────────────────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_NILVALUE = "-"
SEVERITY_EMERG = 0
SEVERITY_ALERT = 1
SEVERITY_CRIT = 2
SEVERITY_ERROR = 3
SEVERITY_WARNING = 4
SEVERITY_NOTICE = 5
SEVERITY_INFO = 6
SEVERITY_DEBUG = 7
_MAX_HOSTNAME = 255
_MAX_APPNAME = 48
_MAX_MSGID = 32
# ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str:
"""Escape SD-PARAM-VALUE per RFC 5424 §6.3.3."""
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _sd_element(fields: dict[str, Any]) -> str:
if not fields:
return _NILVALUE
params = " ".join(f'{k}="{_sd_escape(str(v))}"' for k, v in fields.items())
return f"[{_SD_ID} {params}]"
def syslog_line(
service: str,
hostname: str,
event_type: str,
severity: int = SEVERITY_INFO,
timestamp: datetime | None = None,
msg: str | None = None,
**fields: Any,
) -> str:
"""
Return a single RFC 5424-compliant syslog line (no trailing newline).
Args:
service: APP-NAME (e.g. "http", "mysql")
hostname: HOSTNAME (decky node name)
event_type: MSGID (e.g. "request", "login_attempt")
severity: Syslog severity integer (default: INFO=6)
timestamp: UTC datetime; defaults to now
msg: Optional free-text MSG
**fields: Encoded as structured data params
"""
pri = f"<{_FACILITY_LOCAL0 * 8 + severity}>"
ts = (timestamp or datetime.now(timezone.utc)).isoformat()
host = (hostname or _NILVALUE)[:_MAX_HOSTNAME]
appname = (service or _NILVALUE)[:_MAX_APPNAME]
msgid = (event_type or _NILVALUE)[:_MAX_MSGID]
sd = _sd_element(fields)
message = f" {msg}" if msg else ""
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
def write_syslog_file(line: str) -> None:
"""Emit a syslog line to stdout for Docker log capture."""
print(line, flush=True)
def forward_syslog(line: str, log_target: str) -> None:
"""No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
pass

View File

@@ -0,0 +1,144 @@
#!/usr/bin/env python3
"""
Entrypoint wrapper for the Conpot ICS/SCADA honeypot.
Launches conpot as a child process and bridges its log output into the
DECNET structured syslog pipeline. Each line from conpot stdout/stderr
is classified and emitted as an RFC 5424 syslog line so the host-side
collector can ingest it alongside every other service.
Written to be compatible with Python 3.6 (the conpot base image version).
"""
from __future__ import print_function
import os
import re
import signal
import subprocess
import sys
from datetime import datetime, timezone
# ── RFC 5424 inline formatter (Python 3.6-compatible) ─────────────────────────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_NILVALUE = "-"
SEVERITY_INFO = 6
SEVERITY_WARNING = 4
SEVERITY_ERROR = 3
def _sd_escape(value):
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _syslog_line(event_type, severity=SEVERITY_INFO, **fields):
pri = "<{}>".format(_FACILITY_LOCAL0 * 8 + severity)
ts = datetime.now(timezone.utc).isoformat()
host = NODE_NAME[:255]
appname = "conpot"
msgid = event_type[:32]
if fields:
params = " ".join('{}="{}"'.format(k, _sd_escape(str(v))) for k, v in fields.items())
sd = "[{} {}]".format(_SD_ID, params)
else:
sd = _NILVALUE
return "{pri}1 {ts} {host} {appname} {nil} {msgid} {sd}".format(
pri=pri, ts=ts, host=host, appname=appname,
nil=_NILVALUE, msgid=msgid, sd=sd,
)
def _log(event_type, severity=SEVERITY_INFO, **fields):
print(_syslog_line(event_type, severity, **fields), flush=True)
# ── Config ────────────────────────────────────────────────────────────────────
NODE_NAME = os.environ.get("NODE_NAME", "conpot-node")
TEMPLATE = os.environ.get("CONPOT_TEMPLATE", "default")
_CONPOT_CMD = [
"/home/conpot/.local/bin/conpot",
"--template", TEMPLATE,
"--logfile", "/var/log/conpot/conpot.log",
"-f",
"--temp_dir", "/tmp",
]
# Grab the first routable IPv4 address from a log line
_IP_RE = re.compile(r"\b((?!127\.)(?!0\.)(?!255\.)\d{1,3}(?:\.\d{1,3}){3})\b")
_REQUEST_RE = re.compile(
r"request|recv|received|connect|session|query|command|"
r"modbus|snmp|http|s7comm|bacnet|enip",
re.IGNORECASE,
)
_ERROR_RE = re.compile(r"error|exception|traceback|critical|fail", re.IGNORECASE)
_WARN_RE = re.compile(r"warning|warn", re.IGNORECASE)
_STARTUP_RE = re.compile(
r"starting|started|listening|server|initializ|template|conpot",
re.IGNORECASE,
)
# ── Classifier ────────────────────────────────────────────────────────────────
def _classify(raw):
"""Return (event_type, severity, fields) for one conpot log line."""
fields = {}
m = _IP_RE.search(raw)
if m:
fields["src"] = m.group(1)
fields["msg"] = raw[:300]
if _ERROR_RE.search(raw):
return "error", SEVERITY_ERROR, fields
if _WARN_RE.search(raw):
return "warning", SEVERITY_WARNING, fields
if _REQUEST_RE.search(raw):
return "request", SEVERITY_INFO, fields
if _STARTUP_RE.search(raw):
return "startup", SEVERITY_INFO, fields
return "log", SEVERITY_INFO, fields
# ── Main ──────────────────────────────────────────────────────────────────────
def main():
_log("startup", msg="Conpot ICS honeypot starting (template={})".format(TEMPLATE))
proc = subprocess.Popen(
_CONPOT_CMD,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
universal_newlines=True,
)
def _forward(sig, _frame):
proc.send_signal(sig)
signal.signal(signal.SIGTERM, _forward)
signal.signal(signal.SIGINT, _forward)
try:
for raw_line in proc.stdout:
line = raw_line.rstrip()
if not line:
continue
event_type, severity, fields = _classify(line)
_log(event_type, severity, **fields)
finally:
proc.wait()
_log("shutdown", msg="Conpot ICS honeypot stopped")
sys.exit(proc.returncode)
if __name__ == "__main__":
main()

View File

@@ -7,30 +7,16 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
git authbind \ git authbind \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash cowrie RUN useradd -r -s /bin/false -d /opt decnet \
&& apt-get update && apt-get install -y --no-install-recommends libcap2-bin \
&& rm -rf /var/lib/apt/lists/* \
&& (find /usr/bin/ -maxdepth 1 -name 'python3*' -type f -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true)
WORKDIR /home/cowrie
# pip install strips data/honeyfs — clone source so the fake filesystem is included
RUN git clone --depth 1 https://github.com/cowrie/cowrie.git /tmp/cowrie-src \
&& python3 -m venv cowrie-env \
&& cowrie-env/bin/pip install --no-cache-dir /tmp/cowrie-src jinja2 \
&& rm -rf /tmp/cowrie-src
# Authbind to bind port 22 as non-root
RUN touch /etc/authbind/byport/22 /etc/authbind/byport/2222 \
&& chmod 500 /etc/authbind/byport/22 /etc/authbind/byport/2222 \
&& chown cowrie /etc/authbind/byport/22 /etc/authbind/byport/2222
RUN mkdir -p /home/cowrie/cowrie-env/etc \
/home/cowrie/cowrie-env/var/log/cowrie \
/home/cowrie/cowrie-env/var/run \
&& chown -R cowrie /home/cowrie/cowrie-env/etc \
/home/cowrie/cowrie-env/var
COPY cowrie.cfg.j2 /home/cowrie/cowrie.cfg.j2
COPY entrypoint.sh /entrypoint.sh COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh RUN chmod +x /entrypoint.sh
USER cowrie HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
EXPOSE 22 2222 CMD kill -0 1 || exit 1
USER decnet
ENTRYPOINT ["/entrypoint.sh"] ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -0,0 +1,89 @@
#!/usr/bin/env python3
"""
Shared RFC 5424 syslog helper for DECNET service templates.
Services call syslog_line() to format an RFC 5424 message, then
write_syslog_file() to emit it to stdout — Docker captures it, and the
host-side collector streams it into the log file.
RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
Facility: local0 (16), PEN for SD element ID: decnet@55555
"""
from datetime import datetime, timezone
from typing import Any
# ─── Constants ────────────────────────────────────────────────────────────────
_FACILITY_LOCAL0 = 16
_SD_ID = "decnet@55555"
_NILVALUE = "-"
SEVERITY_EMERG = 0
SEVERITY_ALERT = 1
SEVERITY_CRIT = 2
SEVERITY_ERROR = 3
SEVERITY_WARNING = 4
SEVERITY_NOTICE = 5
SEVERITY_INFO = 6
SEVERITY_DEBUG = 7
_MAX_HOSTNAME = 255
_MAX_APPNAME = 48
_MAX_MSGID = 32
# ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str:
"""Escape SD-PARAM-VALUE per RFC 5424 §6.3.3."""
return value.replace("\\", "\\\\").replace('"', '\\"').replace("]", "\\]")
def _sd_element(fields: dict[str, Any]) -> str:
if not fields:
return _NILVALUE
params = " ".join(f'{k}="{_sd_escape(str(v))}"' for k, v in fields.items())
return f"[{_SD_ID} {params}]"
def syslog_line(
service: str,
hostname: str,
event_type: str,
severity: int = SEVERITY_INFO,
timestamp: datetime | None = None,
msg: str | None = None,
**fields: Any,
) -> str:
"""
Return a single RFC 5424-compliant syslog line (no trailing newline).
Args:
service: APP-NAME (e.g. "http", "mysql")
hostname: HOSTNAME (decky node name)
event_type: MSGID (e.g. "request", "login_attempt")
severity: Syslog severity integer (default: INFO=6)
timestamp: UTC datetime; defaults to now
msg: Optional free-text MSG
**fields: Encoded as structured data params
"""
pri = f"<{_FACILITY_LOCAL0 * 8 + severity}>"
ts = (timestamp or datetime.now(timezone.utc)).isoformat()
host = (hostname or _NILVALUE)[:_MAX_HOSTNAME]
appname = (service or _NILVALUE)[:_MAX_APPNAME]
msgid = (event_type or _NILVALUE)[:_MAX_MSGID]
sd = _sd_element(fields)
message = f" {msg}" if msg else ""
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
def write_syslog_file(line: str) -> None:
"""Emit a syslog line to stdout for Docker log capture."""
print(line, flush=True)
def forward_syslog(line: str, log_target: str) -> None:
"""No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
pass

View File

@@ -0,0 +1,62 @@
root:x:0:
daemon:x:1:
bin:x:2:
sys:x:3:
adm:x:4:syslog,admin
tty:x:5:
disk:x:6:
lp:x:7:
mail:x:8:
news:x:9:
uucp:x:10:
man:x:12:
proxy:x:13:
kmem:x:15:
dialout:x:20:
fax:x:21:
voice:x:22:
cdrom:x:24:admin
floppy:x:25:
tape:x:26:
sudo:x:27:admin
audio:x:29:
dip:x:30:admin
www-data:x:33:
backup:x:34:
operator:x:37:
list:x:38:
irc:x:39:
src:x:40:
gnats:x:41:
shadow:x:42:
utmp:x:43:
video:x:44:
sasl:x:45:
plugdev:x:46:admin
staff:x:50:
games:x:60:
users:x:100:
nogroup:x:65534:
systemd-journal:x:101:
systemd-network:x:102:
systemd-resolve:x:103:
crontab:x:104:
messagebus:x:105:
systemd-timesync:x:106:
input:x:107:
sgx:x:108:
kvm:x:109:
render:x:110:
syslog:x:110:
tss:x:111:
uuidd:x:112:
tcpdump:x:113:
ssl-cert:x:114:
landscape:x:115:
fwupd-refresh:x:116:
usbmux:x:46:
lxd:x:117:admin
systemd-coredump:x:999:
mysql:x:119:
netdev:x:120:admin
admin:x:1000:

View File

@@ -0,0 +1 @@
NODE_NAME

View File

@@ -0,0 +1,5 @@
127.0.0.1 localhost
127.0.1.1 NODE_NAME
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

View File

@@ -0,0 +1,2 @@
Ubuntu 22.04.3 LTS \n \l

View File

@@ -0,0 +1 @@
Ubuntu 22.04.3 LTS

View File

@@ -0,0 +1,26 @@
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Jan 15 09:12:44 UTC 2024
System load: 0.08 Processes: 142
Usage of /: 34.2% of 49.10GB Users logged in: 0
Memory usage: 22% IPv4 address for eth0: 10.0.1.5
Swap usage: 0%
* Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
just raised the bar for K8s security.
https://ubuntu.com/engage/secure-kubernetes-at-the-edge
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
Last login: Sun Jan 14 23:45:01 2024 from 10.0.0.1

View File

@@ -0,0 +1,12 @@
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

View File

@@ -0,0 +1,36 @@
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-network:x:100:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:102:105::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:103:106:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
syslog:x:104:110::/home/syslog:/usr/sbin/nologin
_apt:x:105:65534::/nonexistent:/usr/sbin/nologin
tss:x:106:111:TPM software stack,,,:/var/lib/tpm:/bin/false
uuidd:x:107:112::/run/uuidd:/usr/sbin/nologin
tcpdump:x:108:113::/nonexistent:/usr/sbin/nologin
landscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:110:1::/var/cache/pollinate:/bin/false
fwupd-refresh:x:111:116:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
usbmux:x:112:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
sshd:x:113:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
mysql:x:114:119:MySQL Server,,,:/nonexistent:/bin/false
admin:x:1000:1000:Admin User,,,:/home/admin:/bin/bash

View File

@@ -0,0 +1,4 @@
# This file is managed by man:systemd-resolved(8). Do not edit.
nameserver 8.8.8.8
nameserver 8.8.4.4
search company.internal

View File

@@ -0,0 +1,36 @@
root:$6$rounds=4096$randomsalt$hashed_root_password:19000:0:99999:7:::
daemon:*:19000:0:99999:7:::
bin:*:19000:0:99999:7:::
sys:*:19000:0:99999:7:::
sync:*:19000:0:99999:7:::
games:*:19000:0:99999:7:::
man:*:19000:0:99999:7:::
lp:*:19000:0:99999:7:::
mail:*:19000:0:99999:7:::
news:*:19000:0:99999:7:::
uucp:*:19000:0:99999:7:::
proxy:*:19000:0:99999:7:::
www-data:*:19000:0:99999:7:::
backup:*:19000:0:99999:7:::
list:*:19000:0:99999:7:::
irc:*:19000:0:99999:7:::
gnats:*:19000:0:99999:7:::
nobody:*:19000:0:99999:7:::
systemd-network:*:19000:0:99999:7:::
systemd-resolve:*:19000:0:99999:7:::
messagebus:*:19000:0:99999:7:::
systemd-timesync:*:19000:0:99999:7:::
syslog:*:19000:0:99999:7:::
_apt:*:19000:0:99999:7:::
tss:*:19000:0:99999:7:::
uuidd:*:19000:0:99999:7:::
tcpdump:*:19000:0:99999:7:::
landscape:*:19000:0:99999:7:::
pollinate:*:19000:0:99999:7:::
fwupd-refresh:*:19000:0:99999:7:::
usbmux:*:19000:0:99999:7:::
sshd:*:19000:0:99999:7:::
systemd-coredump:!!:19000::::::
lxd:!:19000::::::
mysql:!:19000:0:99999:7:::
admin:$6$rounds=4096$xyz123$hashed_admin_password:19000:0:99999:7:::

View File

@@ -0,0 +1,14 @@
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-east-1
[production]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
region = us-east-1
[backup-role]
aws_access_key_id = AKIAIOSFODNN7BACKUP1
aws_secret_access_key = 9drTJvcXLB89EXAMPLEKEY/bPxRfiCYBACKUPKEY
region = eu-west-2

View File

@@ -0,0 +1,33 @@
ls -la
cd /var/www/html
git status
git pull origin main
sudo systemctl restart nginx
sudo systemctl status nginx
df -h
free -m
top
ps aux | grep nginx
aws s3 ls
aws s3 ls s3://company-prod-backups
aws s3 cp /var/www/html/backup.tar.gz s3://company-prod-backups/
aws ec2 describe-instances --region us-east-1
kubectl get pods -n production
kubectl get services -n production
kubectl describe pod api-deployment-7d4b9c5f6-xk2pz -n production
docker ps
docker images
docker-compose up -d
mysql -u admin -pSup3rS3cr3t! -h 10.0.1.5 production
cat /etc/mysql/my.cnf
tail -f /var/log/nginx/access.log
tail -f /var/log/auth.log
ssh root@10.0.1.10
scp admin@10.0.1.20:/home/admin/.aws/credentials /tmp/
cat ~/.aws/credentials
vim ~/.aws/credentials
sudo crontab -l
ls /opt/app/
cd /opt/app && npm run build
git log --oneline -20
history

View File

@@ -0,0 +1,2 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekeyforadminuser+xamplekey admin@workstation
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbackupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline+backupkeyfordeploymentpipeline deploy@ci-runner

View File

@@ -0,0 +1,22 @@
whoami
id
uname -a
cat /etc/passwd
cat /etc/shadow
ls /home
ls /home/admin
cat /home/admin/.bash_history
cat /home/admin/.aws/credentials
find / -name "*.pem" 2>/dev/null
find / -name "id_rsa" 2>/dev/null
find / -name "*.key" 2>/dev/null
netstat -tunlp
ss -tunlp
iptables -L
cat /etc/crontab
crontab -l
ps aux
systemctl list-units
cat /etc/mysql/my.cnf
mysql -u root -p
history -c

View File

@@ -0,0 +1,12 @@
Jan 14 23:31:04 NODE_NAME sshd[1832]: Accepted publickey for admin from 10.0.0.1 port 54321 ssh2: RSA SHA256:xAmPlEkEyHaSh1234567890abcdefghijklmnop
Jan 14 23:31:04 NODE_NAME sshd[1832]: pam_unix(sshd:session): session opened for user admin by (uid=0)
Jan 14 23:31:46 NODE_NAME sudo[1901]: admin : TTY=pts/0 ; PWD=/home/admin ; USER=root ; COMMAND=/usr/bin/systemctl restart nginx
Jan 14 23:31:46 NODE_NAME sudo[1901]: pam_unix(sudo:session): session opened for user root by admin(uid=0)
Jan 14 23:31:47 NODE_NAME sudo[1901]: pam_unix(sudo:session): session closed for user root
Jan 14 23:45:01 NODE_NAME sshd[1832]: pam_unix(sshd:session): session closed for user admin
Jan 15 00:02:14 NODE_NAME sshd[2104]: Failed password for invalid user oracle from 185.220.101.47 port 38291 ssh2
Jan 15 00:02:16 NODE_NAME sshd[2106]: Failed password for invalid user postgres from 185.220.101.47 port 38295 ssh2
Jan 15 00:02:19 NODE_NAME sshd[2108]: Failed password for root from 185.220.101.47 port 38301 ssh2
Jan 15 00:02:19 NODE_NAME sshd[2108]: error: maximum authentication attempts exceeded for root from 185.220.101.47 port 38301 ssh2 [preauth]
Jan 15 09:12:44 NODE_NAME sshd[2891]: Accepted password for admin from 10.0.0.5 port 51243 ssh2
Jan 15 09:12:44 NODE_NAME sshd[2891]: pam_unix(sshd:session): session opened for user admin by (uid=0)

View File

@@ -2,10 +2,9 @@
""" """
Shared RFC 5424 syslog helper for DECNET service templates. Shared RFC 5424 syslog helper for DECNET service templates.
Provides two functions consumed by every service's server.py: Services call syslog_line() to format an RFC 5424 message, then
- syslog_line(service, hostname, event_type, severity, **fields) -> str write_syslog_file() to emit it to stdout — Docker captures it, and the
- write_syslog_file(line: str) -> None host-side collector streams it into the log file.
- forward_syslog(line: str, log_target: str) -> None
RFC 5424 structure: RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
@@ -13,12 +12,7 @@ RFC 5424 structure:
Facility: local0 (16), PEN for SD element ID: decnet@55555 Facility: local0 (16), PEN for SD element ID: decnet@55555
""" """
import logging
import logging.handlers
import os
import socket
from datetime import datetime, timezone from datetime import datetime, timezone
from pathlib import Path
from typing import Any from typing import Any
# ─── Constants ──────────────────────────────────────────────────────────────── # ─── Constants ────────────────────────────────────────────────────────────────
@@ -40,11 +34,6 @@ _MAX_HOSTNAME = 255
_MAX_APPNAME = 48 _MAX_APPNAME = 48
_MAX_MSGID = 32 _MAX_MSGID = 32
_LOG_FILE_ENV = "DECNET_LOG_FILE"
_DEFAULT_LOG_FILE = "/var/log/decnet/decnet.log"
_MAX_BYTES = 10 * 1024 * 1024 # 10 MB
_BACKUP_COUNT = 5
# ─── Formatter ──────────────────────────────────────────────────────────────── # ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str: def _sd_escape(value: str) -> str:
@@ -90,152 +79,11 @@ def syslog_line(
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}" return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
# ─── File handler ─────────────────────────────────────────────────────────────
_file_logger: logging.Logger | None = None
def _get_file_logger() -> logging.Logger:
global _file_logger
if _file_logger is not None:
return _file_logger
log_path = Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))
try:
log_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.FileHandler(
log_path,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_file_logger = logging.getLogger("decnet.syslog")
_file_logger.setLevel(logging.DEBUG)
_file_logger.propagate = False
_file_logger.addHandler(handler)
return _file_logger
_json_logger: logging.Logger | None = None
def _get_json_logger() -> logging.Logger:
global _json_logger
if _json_logger is not None:
return _json_logger
log_path_str = os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE)
json_path = Path(log_path_str).with_suffix(".json")
try:
json_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.FileHandler(
json_path,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_json_logger = logging.getLogger("decnet.json")
_json_logger.setLevel(logging.DEBUG)
_json_logger.propagate = False
_json_logger.addHandler(handler)
return _json_logger
def write_syslog_file(line: str) -> None: def write_syslog_file(line: str) -> None:
"""Append a syslog line to the rotating log file.""" """Emit a syslog line to stdout for Docker log capture."""
try: print(line, flush=True)
_get_file_logger().info(line)
# Also parse and write JSON log
import json
import re
from datetime import datetime
from typing import Optional, Any
_RFC5424_RE: re.Pattern = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME (decky name)
r"(\S+) " # 3: APP-NAME (service)
r"- " # PROCID always NILVALUE
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD element + optional MSG
)
_SD_BLOCK_RE: re.Pattern = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE: re.Pattern = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS: tuple[str, ...] = ("src_ip", "src", "client_ip", "remote_ip", "ip")
_m: Optional[re.Match] = _RFC5424_RE.match(line)
if _m:
_ts_raw: str
_decky: str
_service: str
_event_type: str
_sd_rest: str
_ts_raw, _decky, _service, _event_type, _sd_rest = _m.groups()
_fields: dict[str, str] = {}
_msg: str = ""
if _sd_rest.startswith("-"):
_msg = _sd_rest[1:].lstrip()
elif _sd_rest.startswith("["):
_block: Optional[re.Match] = _SD_BLOCK_RE.search(_sd_rest)
if _block:
for _k, _v in _PARAM_RE.findall(_block.group(1)):
_fields[_k] = _v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
# extract msg after the block
_msg_match: Optional[re.Match] = re.search(r'\]\s+(.+)$', _sd_rest)
if _msg_match:
_msg = _msg_match.group(1).strip()
else:
_msg = _sd_rest
_attacker_ip: str = "Unknown"
for _fname in _IP_FIELDS:
if _fname in _fields:
_attacker_ip = _fields[_fname]
break
# Parse timestamp to normalize it
_ts_formatted: str
try:
_ts_formatted = datetime.fromisoformat(_ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
_ts_formatted = _ts_raw
_payload: dict[str, Any] = {
"timestamp": _ts_formatted,
"decky": _decky,
"service": _service,
"event_type": _event_type,
"attacker_ip": _attacker_ip,
"fields": json.dumps(_fields),
"msg": _msg,
"raw_line": line
}
_get_json_logger().info(json.dumps(_payload))
except Exception:
pass
# ─── TCP forwarding ───────────────────────────────────────────────────────────
def forward_syslog(line: str, log_target: str) -> None: def forward_syslog(line: str, log_target: str) -> None:
"""Forward a syslog line over TCP to log_target (ip:port).""" """No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
if not log_target: pass
return
try:
host, port = log_target.rsplit(":", 1)
with socket.create_connection((host, int(port)), timeout=3) as s:
s.sendall((line + "\n").encode())
except Exception:
pass

View File

@@ -14,4 +14,13 @@ COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh RUN chmod +x /entrypoint.sh
EXPOSE 2375 2376 EXPOSE 2375 2376
RUN useradd -r -s /bin/false -d /opt decnet \
&& apt-get update && apt-get install -y --no-install-recommends libcap2-bin \
&& rm -rf /var/lib/apt/lists/* \
&& (find /usr/bin/ -maxdepth 1 -name 'python3*' -type f -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD kill -0 1 || exit 1
USER decnet
ENTRYPOINT ["/entrypoint.sh"] ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -2,10 +2,9 @@
""" """
Shared RFC 5424 syslog helper for DECNET service templates. Shared RFC 5424 syslog helper for DECNET service templates.
Provides two functions consumed by every service's server.py: Services call syslog_line() to format an RFC 5424 message, then
- syslog_line(service, hostname, event_type, severity, **fields) -> str write_syslog_file() to emit it to stdout — Docker captures it, and the
- write_syslog_file(line: str) -> None host-side collector streams it into the log file.
- forward_syslog(line: str, log_target: str) -> None
RFC 5424 structure: RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
@@ -13,12 +12,7 @@ RFC 5424 structure:
Facility: local0 (16), PEN for SD element ID: decnet@55555 Facility: local0 (16), PEN for SD element ID: decnet@55555
""" """
import logging
import logging.handlers
import os
import socket
from datetime import datetime, timezone from datetime import datetime, timezone
from pathlib import Path
from typing import Any from typing import Any
# ─── Constants ──────────────────────────────────────────────────────────────── # ─── Constants ────────────────────────────────────────────────────────────────
@@ -40,11 +34,6 @@ _MAX_HOSTNAME = 255
_MAX_APPNAME = 48 _MAX_APPNAME = 48
_MAX_MSGID = 32 _MAX_MSGID = 32
_LOG_FILE_ENV = "DECNET_LOG_FILE"
_DEFAULT_LOG_FILE = "/var/log/decnet/decnet.log"
_MAX_BYTES = 10 * 1024 * 1024 # 10 MB
_BACKUP_COUNT = 5
# ─── Formatter ──────────────────────────────────────────────────────────────── # ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str: def _sd_escape(value: str) -> str:
@@ -90,156 +79,11 @@ def syslog_line(
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}" return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
# ─── File handler ─────────────────────────────────────────────────────────────
_file_logger: logging.Logger | None = None
def _get_file_logger() -> logging.Logger:
global _file_logger
if _file_logger is not None:
return _file_logger
log_path = Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))
try:
log_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
log_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_file_logger = logging.getLogger("decnet.syslog")
_file_logger.setLevel(logging.DEBUG)
_file_logger.propagate = False
_file_logger.addHandler(handler)
return _file_logger
_json_logger: logging.Logger | None = None
def _get_json_logger() -> logging.Logger:
global _json_logger
if _json_logger is not None:
return _json_logger
log_path_str = os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE)
json_path = Path(log_path_str).with_suffix(".json")
try:
json_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
json_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_json_logger = logging.getLogger("decnet.json")
_json_logger.setLevel(logging.DEBUG)
_json_logger.propagate = False
_json_logger.addHandler(handler)
return _json_logger
def write_syslog_file(line: str) -> None: def write_syslog_file(line: str) -> None:
"""Append a syslog line to the rotating log file.""" """Emit a syslog line to stdout for Docker log capture."""
try: print(line, flush=True)
_get_file_logger().info(line)
# Also parse and write JSON log
import json
import re
from datetime import datetime
from typing import Optional, Any
_RFC5424_RE: re.Pattern = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME (decky name)
r"(\S+) " # 3: APP-NAME (service)
r"- " # PROCID always NILVALUE
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD element + optional MSG
)
_SD_BLOCK_RE: re.Pattern = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE: re.Pattern = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS: tuple[str, ...] = ("src_ip", "src", "client_ip", "remote_ip", "ip")
_m: Optional[re.Match] = _RFC5424_RE.match(line)
if _m:
_ts_raw: str
_decky: str
_service: str
_event_type: str
_sd_rest: str
_ts_raw, _decky, _service, _event_type, _sd_rest = _m.groups()
_fields: dict[str, str] = {}
_msg: str = ""
if _sd_rest.startswith("-"):
_msg = _sd_rest[1:].lstrip()
elif _sd_rest.startswith("["):
_block: Optional[re.Match] = _SD_BLOCK_RE.search(_sd_rest)
if _block:
for _k, _v in _PARAM_RE.findall(_block.group(1)):
_fields[_k] = _v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
# extract msg after the block
_msg_match: Optional[re.Match] = re.search(r'\]\s+(.+)$', _sd_rest)
if _msg_match:
_msg = _msg_match.group(1).strip()
else:
_msg = _sd_rest
_attacker_ip: str = "Unknown"
for _fname in _IP_FIELDS:
if _fname in _fields:
_attacker_ip = _fields[_fname]
break
# Parse timestamp to normalize it
_ts_formatted: str
try:
_ts_formatted = datetime.fromisoformat(_ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
_ts_formatted = _ts_raw
_payload: dict[str, Any] = {
"timestamp": _ts_formatted,
"decky": _decky,
"service": _service,
"event_type": _event_type,
"attacker_ip": _attacker_ip,
"fields": json.dumps(_fields),
"msg": _msg,
"raw_line": line
}
_get_json_logger().info(json.dumps(_payload))
except Exception:
pass
# ─── TCP forwarding ───────────────────────────────────────────────────────────
def forward_syslog(line: str, log_target: str) -> None: def forward_syslog(line: str, log_target: str) -> None:
"""Forward a syslog line over TCP to log_target (ip:port).""" """No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
if not log_target: pass
return
try:
host, port = log_target.rsplit(":", 1)
with socket.create_connection((host, int(port)), timeout=3) as s:
s.sendall((line + "\n").encode())
except Exception:
pass

View File

@@ -53,7 +53,7 @@ _CONTAINERS = [
"Image": "nginx:latest", "Image": "nginx:latest",
"State": "running", "State": "running",
"Status": "Up 3 days", "Status": "Up 3 days",
"Ports": [{"IP": "0.0.0.0", "PrivatePort": 80, "PublicPort": 8080, "Type": "tcp"}], "Ports": [{"IP": "0.0.0.0", "PrivatePort": 80, "PublicPort": 8080, "Type": "tcp"}], # nosec B104
} }
] ]
@@ -114,4 +114,4 @@ def catch_all(path):
if __name__ == "__main__": if __name__ == "__main__":
_log("startup", msg=f"Docker API server starting as {NODE_NAME}") _log("startup", msg=f"Docker API server starting as {NODE_NAME}")
app.run(host="0.0.0.0", port=2375, debug=False) app.run(host="0.0.0.0", port=2375, debug=False) # nosec B104

View File

@@ -11,4 +11,13 @@ COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh RUN chmod +x /entrypoint.sh
EXPOSE 9200 EXPOSE 9200
RUN useradd -r -s /bin/false -d /opt decnet \
&& apt-get update && apt-get install -y --no-install-recommends libcap2-bin \
&& rm -rf /var/lib/apt/lists/* \
&& (find /usr/bin/ -maxdepth 1 -name 'python3*' -type f -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD kill -0 1 || exit 1
USER decnet
ENTRYPOINT ["/entrypoint.sh"] ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -2,10 +2,9 @@
""" """
Shared RFC 5424 syslog helper for DECNET service templates. Shared RFC 5424 syslog helper for DECNET service templates.
Provides two functions consumed by every service's server.py: Services call syslog_line() to format an RFC 5424 message, then
- syslog_line(service, hostname, event_type, severity, **fields) -> str write_syslog_file() to emit it to stdout — Docker captures it, and the
- write_syslog_file(line: str) -> None host-side collector streams it into the log file.
- forward_syslog(line: str, log_target: str) -> None
RFC 5424 structure: RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
@@ -13,12 +12,7 @@ RFC 5424 structure:
Facility: local0 (16), PEN for SD element ID: decnet@55555 Facility: local0 (16), PEN for SD element ID: decnet@55555
""" """
import logging
import logging.handlers
import os
import socket
from datetime import datetime, timezone from datetime import datetime, timezone
from pathlib import Path
from typing import Any from typing import Any
# ─── Constants ──────────────────────────────────────────────────────────────── # ─── Constants ────────────────────────────────────────────────────────────────
@@ -40,11 +34,6 @@ _MAX_HOSTNAME = 255
_MAX_APPNAME = 48 _MAX_APPNAME = 48
_MAX_MSGID = 32 _MAX_MSGID = 32
_LOG_FILE_ENV = "DECNET_LOG_FILE"
_DEFAULT_LOG_FILE = "/var/log/decnet/decnet.log"
_MAX_BYTES = 10 * 1024 * 1024 # 10 MB
_BACKUP_COUNT = 5
# ─── Formatter ──────────────────────────────────────────────────────────────── # ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str: def _sd_escape(value: str) -> str:
@@ -90,156 +79,11 @@ def syslog_line(
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}" return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
# ─── File handler ─────────────────────────────────────────────────────────────
_file_logger: logging.Logger | None = None
def _get_file_logger() -> logging.Logger:
global _file_logger
if _file_logger is not None:
return _file_logger
log_path = Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))
try:
log_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
log_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_file_logger = logging.getLogger("decnet.syslog")
_file_logger.setLevel(logging.DEBUG)
_file_logger.propagate = False
_file_logger.addHandler(handler)
return _file_logger
_json_logger: logging.Logger | None = None
def _get_json_logger() -> logging.Logger:
global _json_logger
if _json_logger is not None:
return _json_logger
log_path_str = os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE)
json_path = Path(log_path_str).with_suffix(".json")
try:
json_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
json_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_json_logger = logging.getLogger("decnet.json")
_json_logger.setLevel(logging.DEBUG)
_json_logger.propagate = False
_json_logger.addHandler(handler)
return _json_logger
def write_syslog_file(line: str) -> None: def write_syslog_file(line: str) -> None:
"""Append a syslog line to the rotating log file.""" """Emit a syslog line to stdout for Docker log capture."""
try: print(line, flush=True)
_get_file_logger().info(line)
# Also parse and write JSON log
import json
import re
from datetime import datetime
from typing import Optional, Any
_RFC5424_RE: re.Pattern = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME (decky name)
r"(\S+) " # 3: APP-NAME (service)
r"- " # PROCID always NILVALUE
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD element + optional MSG
)
_SD_BLOCK_RE: re.Pattern = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE: re.Pattern = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS: tuple[str, ...] = ("src_ip", "src", "client_ip", "remote_ip", "ip")
_m: Optional[re.Match] = _RFC5424_RE.match(line)
if _m:
_ts_raw: str
_decky: str
_service: str
_event_type: str
_sd_rest: str
_ts_raw, _decky, _service, _event_type, _sd_rest = _m.groups()
_fields: dict[str, str] = {}
_msg: str = ""
if _sd_rest.startswith("-"):
_msg = _sd_rest[1:].lstrip()
elif _sd_rest.startswith("["):
_block: Optional[re.Match] = _SD_BLOCK_RE.search(_sd_rest)
if _block:
for _k, _v in _PARAM_RE.findall(_block.group(1)):
_fields[_k] = _v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
# extract msg after the block
_msg_match: Optional[re.Match] = re.search(r'\]\s+(.+)$', _sd_rest)
if _msg_match:
_msg = _msg_match.group(1).strip()
else:
_msg = _sd_rest
_attacker_ip: str = "Unknown"
for _fname in _IP_FIELDS:
if _fname in _fields:
_attacker_ip = _fields[_fname]
break
# Parse timestamp to normalize it
_ts_formatted: str
try:
_ts_formatted = datetime.fromisoformat(_ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
_ts_formatted = _ts_raw
_payload: dict[str, Any] = {
"timestamp": _ts_formatted,
"decky": _decky,
"service": _service,
"event_type": _event_type,
"attacker_ip": _attacker_ip,
"fields": json.dumps(_fields),
"msg": _msg,
"raw_line": line
}
_get_json_logger().info(json.dumps(_payload))
except Exception:
pass
# ─── TCP forwarding ───────────────────────────────────────────────────────────
def forward_syslog(line: str, log_target: str) -> None: def forward_syslog(line: str, log_target: str) -> None:
"""Forward a syslog line over TCP to log_target (ip:port).""" """No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
if not log_target: pass
return
try:
host, port = log_target.rsplit(":", 1)
with socket.create_connection((host, int(port)), timeout=3) as s:
s.sendall((line + "\n").encode())
except Exception:
pass

View File

@@ -120,5 +120,5 @@ class ESHandler(BaseHTTPRequestHandler):
if __name__ == "__main__": if __name__ == "__main__":
_log("startup", msg=f"Elasticsearch server starting as {NODE_NAME}") _log("startup", msg=f"Elasticsearch server starting as {NODE_NAME}")
server = HTTPServer(("0.0.0.0", 9200), ESHandler) server = HTTPServer(("0.0.0.0", 9200), ESHandler) # nosec B104
server.serve_forever() server.serve_forever()

View File

@@ -14,4 +14,13 @@ COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh RUN chmod +x /entrypoint.sh
EXPOSE 21 EXPOSE 21
RUN useradd -r -s /bin/false -d /opt decnet \
&& apt-get update && apt-get install -y --no-install-recommends libcap2-bin \
&& rm -rf /var/lib/apt/lists/* \
&& (find /usr/bin/ -maxdepth 1 -name 'python3*' -type f -exec setcap 'cap_net_bind_service+eip' {} \; 2>/dev/null || true)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD kill -0 1 || exit 1
USER decnet
ENTRYPOINT ["/entrypoint.sh"] ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -2,10 +2,9 @@
""" """
Shared RFC 5424 syslog helper for DECNET service templates. Shared RFC 5424 syslog helper for DECNET service templates.
Provides two functions consumed by every service's server.py: Services call syslog_line() to format an RFC 5424 message, then
- syslog_line(service, hostname, event_type, severity, **fields) -> str write_syslog_file() to emit it to stdout — Docker captures it, and the
- write_syslog_file(line: str) -> None host-side collector streams it into the log file.
- forward_syslog(line: str, log_target: str) -> None
RFC 5424 structure: RFC 5424 structure:
<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ELEMENT] MSG
@@ -13,12 +12,7 @@ RFC 5424 structure:
Facility: local0 (16), PEN for SD element ID: decnet@55555 Facility: local0 (16), PEN for SD element ID: decnet@55555
""" """
import logging
import logging.handlers
import os
import socket
from datetime import datetime, timezone from datetime import datetime, timezone
from pathlib import Path
from typing import Any from typing import Any
# ─── Constants ──────────────────────────────────────────────────────────────── # ─── Constants ────────────────────────────────────────────────────────────────
@@ -40,11 +34,6 @@ _MAX_HOSTNAME = 255
_MAX_APPNAME = 48 _MAX_APPNAME = 48
_MAX_MSGID = 32 _MAX_MSGID = 32
_LOG_FILE_ENV = "DECNET_LOG_FILE"
_DEFAULT_LOG_FILE = "/var/log/decnet/decnet.log"
_MAX_BYTES = 10 * 1024 * 1024 # 10 MB
_BACKUP_COUNT = 5
# ─── Formatter ──────────────────────────────────────────────────────────────── # ─── Formatter ────────────────────────────────────────────────────────────────
def _sd_escape(value: str) -> str: def _sd_escape(value: str) -> str:
@@ -90,156 +79,11 @@ def syslog_line(
return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}" return f"{pri}1 {ts} {host} {appname} {_NILVALUE} {msgid} {sd}{message}"
# ─── File handler ─────────────────────────────────────────────────────────────
_file_logger: logging.Logger | None = None
def _get_file_logger() -> logging.Logger:
global _file_logger
if _file_logger is not None:
return _file_logger
log_path = Path(os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE))
try:
log_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
log_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_file_logger = logging.getLogger("decnet.syslog")
_file_logger.setLevel(logging.DEBUG)
_file_logger.propagate = False
_file_logger.addHandler(handler)
return _file_logger
_json_logger: logging.Logger | None = None
def _get_json_logger() -> logging.Logger:
global _json_logger
if _json_logger is not None:
return _json_logger
log_path_str = os.environ.get(_LOG_FILE_ENV, _DEFAULT_LOG_FILE)
json_path = Path(log_path_str).with_suffix(".json")
try:
json_path.parent.mkdir(parents=True, exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
json_path,
maxBytes=_MAX_BYTES,
backupCount=_BACKUP_COUNT,
encoding="utf-8",
)
except OSError:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
_json_logger = logging.getLogger("decnet.json")
_json_logger.setLevel(logging.DEBUG)
_json_logger.propagate = False
_json_logger.addHandler(handler)
return _json_logger
def write_syslog_file(line: str) -> None: def write_syslog_file(line: str) -> None:
"""Append a syslog line to the rotating log file.""" """Emit a syslog line to stdout for Docker log capture."""
try: print(line, flush=True)
_get_file_logger().info(line)
# Also parse and write JSON log
import json
import re
from datetime import datetime
from typing import Optional, Any
_RFC5424_RE: re.Pattern = re.compile(
r"^<\d+>1 "
r"(\S+) " # 1: TIMESTAMP
r"(\S+) " # 2: HOSTNAME (decky name)
r"(\S+) " # 3: APP-NAME (service)
r"- " # PROCID always NILVALUE
r"(\S+) " # 4: MSGID (event_type)
r"(.+)$", # 5: SD element + optional MSG
)
_SD_BLOCK_RE: re.Pattern = re.compile(r'\[decnet@55555\s+(.*?)\]', re.DOTALL)
_PARAM_RE: re.Pattern = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')
_IP_FIELDS: tuple[str, ...] = ("src_ip", "src", "client_ip", "remote_ip", "ip")
_m: Optional[re.Match] = _RFC5424_RE.match(line)
if _m:
_ts_raw: str
_decky: str
_service: str
_event_type: str
_sd_rest: str
_ts_raw, _decky, _service, _event_type, _sd_rest = _m.groups()
_fields: dict[str, str] = {}
_msg: str = ""
if _sd_rest.startswith("-"):
_msg = _sd_rest[1:].lstrip()
elif _sd_rest.startswith("["):
_block: Optional[re.Match] = _SD_BLOCK_RE.search(_sd_rest)
if _block:
for _k, _v in _PARAM_RE.findall(_block.group(1)):
_fields[_k] = _v.replace('\\"', '"').replace("\\\\", "\\").replace("\\]", "]")
# extract msg after the block
_msg_match: Optional[re.Match] = re.search(r'\]\s+(.+)$', _sd_rest)
if _msg_match:
_msg = _msg_match.group(1).strip()
else:
_msg = _sd_rest
_attacker_ip: str = "Unknown"
for _fname in _IP_FIELDS:
if _fname in _fields:
_attacker_ip = _fields[_fname]
break
# Parse timestamp to normalize it
_ts_formatted: str
try:
_ts_formatted = datetime.fromisoformat(_ts_raw).strftime("%Y-%m-%d %H:%M:%S")
except ValueError:
_ts_formatted = _ts_raw
_payload: dict[str, Any] = {
"timestamp": _ts_formatted,
"decky": _decky,
"service": _service,
"event_type": _event_type,
"attacker_ip": _attacker_ip,
"fields": json.dumps(_fields),
"msg": _msg,
"raw_line": line
}
_get_json_logger().info(json.dumps(_payload))
except Exception:
pass
# ─── TCP forwarding ───────────────────────────────────────────────────────────
def forward_syslog(line: str, log_target: str) -> None: def forward_syslog(line: str, log_target: str) -> None:
"""Forward a syslog line over TCP to log_target (ip:port).""" """No-op stub. TCP forwarding is now handled by rsyslog, not by service containers."""
if not log_target: pass
return
try:
host, port = log_target.rsplit(":", 1)
with socket.create_connection((host, int(port)), timeout=3) as s:
s.sendall((line + "\n").encode())
except Exception:
pass

Some files were not shown because too many files have changed in this diff Show More