High Risk — Risk Score 68/100
Last scan: 19 hr ago
LLM Proxy
Multi-provider LLM API proxy with content security auditing, streaming response detection, and health monitoring
The skill acts as an unauthenticated local proxy that forwards user API credentials to external LLM providers. Its content-security layer claims to block violations but only logs critical alerts (including credential-theft and reverse-shell patterns), creating a deceptive illusion of security.
Skill Name: LLM Proxy
Duration: 80.7s
Engine: pi
Do not install this skill
Do not deploy. The skill forwards credentials without authorization checks, its blocking mechanism is disabled for critical severity (which includes credential exfiltration), and shell execution is used without being declared in SKILL.md.

Attack Chain 5 steps

Entry User invokes skill to '启动llm-proxy' (start llm-proxy)
SKILL.md:17
Escalation Skill runs llm-proxy-ctl.sh, which starts the daemon as a background process (&) and uses kill/kill -9 for port cleanup — shell:WRITE capability used without declaration
scripts/llm-proxy-ctl.sh:34
Escalation Proxy binds to TCP port 18888 on 127.0.0.1 — network:WRITE not declared in SKILL.md
scripts/llm-proxy.py:482
Escalation Attacker sends a POST carrying a stolen API key through the proxy (no local authentication is required, and block=false means the request is forwarded rather than blocked)
scripts/llm-proxy.py:259
Impact CRED-001 fires but block=false; credential exfiltration is only logged, not prevented. Stolen key is forwarded to upstream provider.
scripts/content-filter-rules.json:217

Findings 8 items

Severity Finding Location
Critical
Critical content-blocking disabled — credential exfiltration not prevented [Credential Theft]
content-filter-rules.json defines CRED-001 (sk-, AKIA-, ghp_, api_key patterns) as severity=critical, but response_actions.critical.block=false. The proxy will log credential-theft attempts but forward the actual request to the upstream LLM provider, meaning stolen API keys go through the proxy.
"response_actions": {"critical": {"block": false, "log": true, "alert": true}}
→ Set block: true for critical severity in response_actions, or add a hard check in llm-proxy.py before _forward_request() is called.
scripts/content-filter-rules.json:217
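The recommended hard check could be sketched as follows. This is illustrative only: the report names `_forward_request()` and the CRED-001 patterns, but `check_and_forward`, the pattern list, and the return shape are assumptions, not the skill's actual code.

```python
import re

# Representative CRED-001-style patterns; the skill's exact regexes are
# not shown in this report.
CRED_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def check_and_forward(body: str, forward_request):
    """Hard-block requests containing credential patterns, regardless of
    the configurable response_actions.critical.block flag."""
    for pattern in CRED_PATTERNS:
        if pattern.search(body):
            # Log-and-refuse instead of log-and-forward.
            return {"status": 403, "error": "blocked: credential pattern detected"}
    return forward_request(body)
```

A check like this placed before the forwarding call cannot be disabled by a config edit, which is the failure mode this finding describes.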
Critical
API key forwarding without authorization [Credential Theft]
The proxy blindly forwards the Authorization and X-Api-Key headers from incoming requests directly to upstream LLM providers. Any local process can send requests with arbitrary credentials through this proxy. Combined with block=false on credential detection, a stolen API key can be routed through this proxy.
for header in ['Authorization', 'X-Api-Key', 'Api-Key', ...]:
    if header in request_headers:
        headers[header] = request_headers[header]
→ Add local authentication (e.g., a shared secret header) to validate requests before forwarding credentials.
scripts/llm-proxy.py:259
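A minimal sketch of the suggested shared-secret check, assuming incoming headers arrive as a dict. The `X-Proxy-Auth` header name and function shape are assumptions for illustration, not the skill's API.

```python
import hmac

def is_authorized(request_headers: dict, shared_secret: str) -> bool:
    """Reject requests that do not present the local shared secret, so an
    arbitrary local process cannot relay credentials through the proxy."""
    supplied = request_headers.get("X-Proxy-Auth", "")
    # compare_digest gives a constant-time comparison, avoiding a timing
    # side channel; an empty configured secret never authorizes anything.
    return bool(shared_secret) and hmac.compare_digest(supplied, shared_secret)
```

The proxy would call this before copying Authorization or X-Api-Key headers upstream, returning 401 on failure.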
High
Documentation claims blocking, but the code does not block [Doc Mismatch]
SKILL.md states '严重违规时阻断响应并返回错误' (block the response and return an error on serious violations), while README states '内容安全检测仅记录和提醒,不自动拦截(可配置)' (content security detection only logs and alerts; it does not block automatically — configurable). The two claims contradict each other, and since the code sets block=false for all severity levels, SKILL.md's blocking claim is false.
严重违规时阻断响应并返回错误 (block the response and return an error on serious violations)
→ Align documentation with actual behavior or change block=false to block=true in response_actions.
SKILL.md:99
High
Undeclared persistent background service [Sensitive Access]
SKILL.md does not mention that the skill runs a persistent background daemon, opens a TCP port, or manages processes. The llm-proxy-ctl.sh starts a daemon via backgrounding (&) and writes to /tmp/llm-proxy.pid. This is not declared.
python3 -u "$PROXY_SCRIPT" >> "$LOG_FILE" 2>&1 &
→ Declare in SKILL.md that the skill starts a background daemon and manages processes.
scripts/llm-proxy-ctl.sh:34
High
Undeclared shell and process management capabilities [Doc Mismatch]
SKILL.md declares no shell execution, but llm-proxy-ctl.sh uses kill, kill -9, lsof, curl, mkdir, and background process management. SKILL.md also declares no filesystem WRITE, but scripts write to ~/.openclaw/logs/ and /tmp/.
#!/bin/bash ... kill -9, lsof, curl, mkdir
→ Declare shell:WRITE in the capability manifest and document process/service management in SKILL.md.
scripts/llm-proxy-ctl.sh:1
Medium
SIGUSR1 debug handler exposes full thread stacks [RCE]
llm-proxy.py registers a SIGUSR1 handler that, when triggered, prints full thread stacks, including any sensitive data held in stack frames. This could leak internal state, credentials in local variables, or request content.
signal.signal(signal.SIGUSR1, debug_signal_handler)
→ Remove the SIGUSR1 debug handler in production code, or restrict it to trusted users only.
scripts/llm-proxy.py:471
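One way to restrict the handler is to gate installation behind an explicit opt-in, sketched below. The `LLM_PROXY_DEBUG` variable name is a hypothetical choice for illustration, not taken from the skill.

```python
import os
import signal
import sys
import traceback

def debug_signal_handler(signum, frame):
    """Dump every thread's stack — useful for debugging, but it can
    expose credentials and request content held in stack frames."""
    for thread_id, stack in sys._current_frames().items():
        traceback.print_stack(stack)

def install_debug_handler() -> bool:
    """Install the SIGUSR1 stack dumper only when explicitly opted in,
    so production runs never register it."""
    if os.environ.get("LLM_PROXY_DEBUG") == "1":
        signal.signal(signal.SIGUSR1, debug_signal_handler)
        return True
    return False  # not installed in normal operation
```

Removing the handler entirely is the simpler option; the opt-in keeps it available for local debugging without shipping it enabled.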
Medium
Verbose request/response logging to user-writable directory [Data Exfil]
All requests and responses (including message content, provider, request_id, status) are written to ~/.openclaw/logs/llm-proxy/proxy-YYYY-MM-DD.jsonl. While Authorization headers are redacted, the response content and full request metadata are logged. This creates a local data trail.
log_writer.write({timestamp, request_id, provider, path, status, ...})
→ Log only anonymized metadata, not full request/response content. Add .gitignore and warn users about the log directory.
scripts/llm-proxy.py:361
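The metadata-only logging recommendation could look like this sketch. Field names mirror those quoted in the finding; hashing the request_id is an added assumption, not the skill's behavior.

```python
import datetime
import hashlib
import json

def anonymized_log_entry(request_id: str, provider: str, path: str, status: int) -> str:
    """Build a JSONL entry that records routing metadata only: no message
    bodies, no response content, and a hashed request identifier."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "request_id": hashlib.sha256(request_id.encode()).hexdigest()[:16],
        "provider": provider,
        "path": path,
        "status": status,
        # Deliberately no request/response content fields.
    }
    return json.dumps(entry)
```

An entry like this still supports health and usage monitoring while leaving no local trail of conversation content.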
Low
No dependencies declared — no requirements.txt or package.json [Supply Chain]
The skill uses only the Python standard library (json, re, time, os, sys, signal, traceback, uuid, threading, datetime, http.server, socketserver, urllib). There are no third-party dependencies, which reduces supply-chain risk. However, the shell scripts rely on external commands (lsof, kill, curl) that are likewise not declared anywhere.
import json, re, time, os, sys, signal, traceback, uuid, threading...
→ Document that only Python standard library is required and declare shell access in SKILL.md.
scripts/llm-proxy.py:1
Resource Declared Inferred Status Evidence
Network NONE WRITE ✗ Violation SKILL.md:1 — SKILL.md declares no network access, but the proxy opens TCP port 1…
Shell NONE WRITE ✗ Violation llm-proxy-ctl.sh:34,47 — Uses lsof, kill, kill -9, backgrounding python3; SKILL.…
Filesystem NONE WRITE ✗ Violation llm-proxy-ctl.sh:35 — mkdir -p for log dirs; llm-proxy.py:89 — writes to ~/.open…
Environment NONE READ ✗ Violation llm-proxy.py:34-37 — reads LLMPROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE from os.e…
25 findings
🔗 Medium External URL http://127.0.0.1:18888/health (README.md:116)
🔗 Medium External URL https://api.your-provider.com/v1 (README.md:147)
🔗 Medium External URL http://127.0.0.1:18888/your-provider/chat/completions (README.md:156)
🔗 Medium External URL https://api.your-provider.com/v1/chat/completions (README.md:157)
🔗 Medium External URL http://127.0.0.1:18888/openai/chat/completions (README.md:259)
🔗 Medium External URL http://127.0.0.1:18888/bailian/chat/completions (README.md:272)
🔗 Medium External URL https://api.groq.com/openai/v1 (scripts/llm-proxy-config.json:49)
🔗 Medium External URL https://api.cloudflare.com/client/v4/accounts (scripts/llm-proxy-config.json:55)
🔗 Medium External URL https://api.deepseek.com/v1 (scripts/llm-proxy-config.json:61)
🔗 Medium External URL https://api.moonshot.cn/v1 (scripts/llm-proxy-config.json:67)
🔗 Medium External URL https://open.bigmodel.cn/api/paas/v4 (scripts/llm-proxy-config.json:73)
🔗 Medium External URL https://api.siliconflow.cn/v1 (scripts/llm-proxy-config.json:79)
🔗 Medium External URL https://openrouter.ai/api/v1 (scripts/llm-proxy-config.json:98)
🔗 Medium External URL https://integrate.api.nvidia.com/v1 (scripts/llm-proxy-config.json:104)
🔗 Medium External URL https://coding.dashscope.aliyuncs.com/v1 (scripts/llm-proxy-config.json:110)
🔗 Medium External URL https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxin_workshop (scripts/llm-proxy-config.json:116)
🔗 Medium External URL https://spark-api.xf-yun.com/v3.5/chat (scripts/llm-proxy-config.json:122)
🔗 Medium External URL https://api.minimax.chat/v1 (scripts/llm-proxy-config.json:128)
🔗 Medium External URL https://api.lingyiwanwu.com/v1 (scripts/llm-proxy-config.json:134)
🔗 Medium External URL https://api.baichuan-ai.com/v1 (scripts/llm-proxy-config.json:140)
🔗 Medium External URL https://api.together.xyz/v1 (scripts/llm-proxy-config.json:146)
🔗 Medium External URL https://api.fireworks.ai/inference/v1 (scripts/llm-proxy-config.json:152)
🔗 Medium External URL https://api.replicate.com/v1 (scripts/llm-proxy-config.json:158)
📧 Info Email [email protected] (scripts/content-filter-rules.json:4)
📧 Info Email [email protected] (scripts/llm-proxy.py:513)

File Tree

7 files · 49.6 KB · 1748 lines
Python 1f · 608L Markdown 2f · 539L JSON 2f · 412L Shell 2f · 189L
├─ 📁 scripts
│ ├─ 📋 content-filter-rules.json JSON 248L · 7.5 KB
│ ├─ 🔧 llm-proxy-common.sh Shell 55L · 1.4 KB
│ ├─ 📋 llm-proxy-config.json JSON 164L · 4.4 KB
│ ├─ 🔧 llm-proxy-ctl.sh Shell 134L · 3.4 KB
│ └─ 🐍 llm-proxy.py Python 608L · 23.6 KB
├─ 📝 README.md Markdown 347L · 6.2 KB
└─ 📝 SKILL.md Markdown 192L · 3.1 KB

Dependencies 1 item

Package Version Source Known Vulns Notes
Python standard library only N/A stdlib No Uses only json, re, time, os, sys, signal, threading, http.server, socketserver, urllib — no pip packages needed

Security Positives

✓ Content filter rules are comprehensive and well-structured with L1 (malicious command), L2 (sensitive content), and L3 (LLM review) layers
✓ Credential patterns (sk-, AKIA-, ghp_) are detected via regex in the filter rules
✓ Authorization headers are redacted in log entries (***REDACTED***)
✓ API keys in response previews are masked with regex substitution
✓ Proxy binds only to 127.0.0.1 (not exposed to the internet)
✓ Request body size is limited (10MB) to prevent DoS
✓ Uses only Python standard library — no third-party dependencies to compromise
✓ Response data field removed from logs (blocked responses only log alert metadata, not content)
✓ Thread-safe logging with locks prevents interleaved or corrupted log entries
✓ Config keys prefixed with '_' are ignored during loading
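As an illustration of the key masking the positives describe, a representative regex substitution follows. The skill's actual patterns are not shown in this report, so both the regex and the function name here are assumptions.

```python
import re

# Representative key prefixes (OpenAI, AWS, GitHub); real deployments
# would cover every provider the proxy supports.
KEY_PATTERN = re.compile(r"\b(sk-|AKIA|ghp_)[A-Za-z0-9]{8,}")

def mask_keys(text: str) -> str:
    """Replace likely API keys with a redaction marker before the text
    is written to any log or response preview."""
    return KEY_PATTERN.sub("***REDACTED***", text)
```

Masking at the logging boundary complements, but does not replace, the hard blocking the Critical findings call for.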