High Risk — Risk Score 68/100
Last scan: 19 hours ago
LLM Proxy
Multi-provider LLM API proxy with content security auditing, streaming response detection, and health monitoring
The skill acts as an unauthenticated local proxy that forwards user API credentials to external LLM providers; its content-security layer claims to block violations but merely logs critical alerts (including credential-theft and reverse-shell patterns), creating a false sense of security.
Skill name: LLM Proxy
Analysis time: 80.7 s
Engine: pi
Do not install this skill
Do not deploy. The skill forwards credentials without authorization checks, its blocking mechanism is disabled for critical severity (which includes credential exfiltration), and shell execution is used without being declared in SKILL.md.

Attack Chain (5 steps)

Entry · User invokes the skill with '启动llm-proxy' ("start llm-proxy")
SKILL.md:17
Escalation · The skill runs llm-proxy-ctl.sh, which starts the daemon as a background process (&) and cleans up the port with kill / kill -9 — a shell:WRITE capability used without declaration
scripts/llm-proxy-ctl.sh:34
Escalation · The proxy binds TCP port 18888 on 127.0.0.1 — network:WRITE is not declared in SKILL.md
scripts/llm-proxy.py:482
Escalation · An attacker sends a POST with a stolen API key through the proxy (no authentication required, since block=false)
scripts/llm-proxy.py:259
Impact · CRED-001 fires, but block=false: credential exfiltration is only logged, not prevented. The stolen key is forwarded to the upstream provider.
scripts/content-filter-rules.json:217

Security Findings (8)

Severity · Finding · Location
Critical
Critical content-blocking disabled — credential exfiltration not prevented · credential theft
content-filter-rules.json defines CRED-001 (sk-, AKIA-, ghp_, api_key patterns) as severity=critical, but response_actions.critical.block=false. The proxy will log credential-theft attempts but forward the actual request to the upstream LLM provider, meaning stolen API keys go through the proxy.
"response_actions": {"critical": {"block": false, "log": true, "alert": true}}
→ Set block: true for critical severity in response_actions, or add a hard check in llm-proxy.py before _forward_request() is called.
scripts/content-filter-rules.json:217
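The recommended hard check could look like the following sketch. The function and data-structure names here are assumptions for illustration, not taken from llm-proxy.py:

```python
# Sketch of a fail-closed check run before any forwarding happens.
# `findings` mimics the filter engine's output; the real structure
# inside llm-proxy.py may differ.
CRITICAL_ALWAYS_BLOCK = True  # independent of response_actions in the JSON

def enforce_critical(findings):
    """Raise instead of forwarding when any critical-severity rule matched."""
    if CRITICAL_ALWAYS_BLOCK and any(f["severity"] == "critical" for f in findings):
        raise PermissionError("blocked: critical content-filter match")

# A CRED-001 hit must stop the request before _forward_request() is reached.
try:
    enforce_critical([{"rule": "CRED-001", "severity": "critical"}])
    forwarded = True
except PermissionError:
    forwarded = False
```

Keeping the critical path fail-closed in code means a misconfigured `response_actions` block can weaken logging but can never re-enable forwarding of credential-theft requests.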
Critical
API key forwarding without authorization · credential theft
The proxy blindly forwards the Authorization and X-Api-Key headers from incoming requests directly to upstream LLM providers. Any local process can send requests with arbitrary credentials through this proxy. Combined with block=false on credential detection, a stolen API key can be routed through this proxy.
for header in ['Authorization', 'X-Api-Key', 'Api-Key', ...]:
    if header in request_headers:
        headers[header] = request_headers[header]
→ Add local authentication (e.g., a shared secret header) to validate requests before forwarding credentials.
scripts/llm-proxy.py:259
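The suggested shared-secret check could be sketched as below; the `X-Proxy-Token` header and `LLM_PROXY_TOKEN` variable names are assumptions, not part of the actual skill:

```python
import hmac
import os

# Shared secret the operator exports before starting the proxy.
PROXY_TOKEN = os.environ.get("LLM_PROXY_TOKEN", "change-me")

def is_authorized(request_headers: dict) -> bool:
    supplied = request_headers.get("X-Proxy-Token", "")
    # Constant-time comparison avoids leaking the token through timing.
    return hmac.compare_digest(supplied, PROXY_TOKEN)

ok = is_authorized({"X-Proxy-Token": PROXY_TOKEN})  # token present
rejected = is_authorized({})                        # token missing
```

Validating the local caller before touching the Authorization headers closes the "any local process can route stolen keys" path even while block=false remains misconfigured.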
High
Documentation claims blocking, code does not block · documentation deception
SKILL.md states '严重违规时阻断响应并返回错误' ("blocks the response and returns an error on serious violations"), while README states '内容安全检测仅记录和提醒,不自动拦截(可配置)' ("content-security checks only log and alert; they do not block automatically (configurable)"). The two statements contradict each other, and the code sets block=false for every severity level: a doc-to-code mismatch.
严重违规时阻断响应并返回错误
→ Align documentation with actual behavior or change block=false to block=true in response_actions.
SKILL.md:99
High
Undeclared persistent background service · sensitive access
SKILL.md does not mention that the skill runs a persistent background daemon, opens a TCP port, or manages processes; llm-proxy-ctl.sh starts the daemon by backgrounding it (&) and writes its PID to /tmp/llm-proxy.pid, none of which is declared.
python3 -u "$PROXY_SCRIPT" >> "$LOG_FILE" 2>&1 &
→ Declare in SKILL.md that the skill starts a background daemon and manages processes.
scripts/llm-proxy-ctl.sh:34
High
Undeclared shell and process-management capabilities · documentation deception
SKILL.md declares no shell execution, but llm-proxy-ctl.sh uses kill, kill -9, lsof, curl, mkdir, and background process management. SKILL.md also declares no filesystem WRITE, but scripts write to ~/.openclaw/logs/ and /tmp/.
#!/bin/bash ... kill -9, lsof, curl, mkdir
→ Declare shell:WRITE in the capability manifest and document process/service management in SKILL.md.
scripts/llm-proxy-ctl.sh:1
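A minimal sketch of the kind of declaration the remediation calls for. The field names and layout below are assumptions; the real SKILL.md manifest schema may differ:

```yaml
# Hypothetical capability-manifest fragment (schema assumed, not taken
# from the actual SKILL.md format).
capabilities:
  shell: WRITE        # kill, lsof, curl, mkdir, background daemon management
  network: WRITE      # binds TCP 127.0.0.1:18888, calls upstream providers
  filesystem: WRITE   # ~/.openclaw/logs/, /tmp/llm-proxy.pid
  environment: READ   # LLMPROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE
```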
Medium
SIGUSR1 debug handler exposes full thread stacks · code execution
llm-proxy.py registers signal.SIGUSR1 which, when triggered, prints full thread stacks including any sensitive data in stack frames. This could leak internal state, credentials in variables, or request content.
signal.signal(signal.SIGUSR1, debug_signal_handler)
→ Remove the SIGUSR1 debug handler in production code, or restrict it to trusted users only.
scripts/llm-proxy.py:471
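One way to restrict the handler, as a sketch: gate registration behind an explicit opt-in. The `LLM_PROXY_DEBUG` variable name is an assumption, not taken from the skill:

```python
import os
import signal
import sys
import traceback

def debug_signal_handler(signum, frame):
    # Dumps every thread's stack; frames can contain request bodies or
    # credentials, which is exactly the leak the finding describes.
    for thread_id, stack in sys._current_frames().items():
        traceback.print_stack(stack)

def install_debug_handler() -> bool:
    """Register the stack-dump handler only when explicitly opted in."""
    if os.environ.get("LLM_PROXY_DEBUG") == "1":
        signal.signal(signal.SIGUSR1, debug_signal_handler)
        return True
    return False

enabled = install_debug_handler()  # False unless LLM_PROXY_DEBUG=1 is set
```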
Medium
Verbose request/response logging to a user-writable directory · data exfiltration
All requests and responses (including message content, provider, request_id, status) are written to ~/.openclaw/logs/llm-proxy/proxy-YYYY-MM-DD.jsonl. While Authorization headers are redacted, the response content and full request metadata are logged. This creates a local data trail.
log_writer.write({timestamp, request_id, provider, path, status, ...})
→ Log only anonymized metadata, not full request/response content. Add .gitignore and warn users about the log directory.
scripts/llm-proxy.py:361
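The metadata-only logging could be sketched like this; the field whitelist is an assumption based on the fields the finding lists:

```python
# Keep a whitelist of low-sensitivity metadata and drop everything else
# (message bodies, response content) before writing the .jsonl entry.
SAFE_FIELDS = {"timestamp", "request_id", "provider", "path", "status"}

def sanitize_entry(entry: dict) -> dict:
    return {k: v for k, v in entry.items() if k in SAFE_FIELDS}

entry = {
    "timestamp": "2024-01-01T00:00:00Z",
    "request_id": "req-123",
    "provider": "openai",
    "path": "/chat/completions",
    "status": 200,
    "messages": [{"role": "user", "content": "prompt text"}],  # dropped
}
sanitized = sanitize_entry(entry)
```

A whitelist is preferable to a blacklist here: any new field a future version adds is excluded by default instead of silently logged.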
Low
No dependencies declared — no requirements.txt or package.json · supply chain
The skill uses only the Python standard library (json, re, time, os, sys, signal, traceback, uuid, threading, datetime, http.server, socketserver, urllib), so there are no third-party packages to compromise. However, the shell scripts rely on external commands (lsof, kill, curl) that are not documented as requirements anywhere.
import json, re, time, os, sys, signal, traceback, uuid, threading...
→ Document that only Python standard library is required and declare shell access in SKILL.md.
scripts/llm-proxy.py:1
Resource · Declared · Inferred · Status · Evidence
Network access · NONE · WRITE · ✗ over-privileged · SKILL.md:1 — SKILL.md declares no network access, but the proxy opens TCP port 1…
Command execution · NONE · WRITE · ✗ over-privileged · llm-proxy-ctl.sh:34,47 — uses lsof, kill, kill -9, backgrounded python3; SKILL.…
Filesystem · NONE · WRITE · ✗ over-privileged · llm-proxy-ctl.sh:35 — mkdir -p for log dirs; llm-proxy.py:89 — writes to ~/.open…
Environment variables · NONE · READ · ✗ over-privileged · llm-proxy.py:34-37 — reads LLMPROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE from os.e…
25 findings
🔗 Medium · External URL · http://127.0.0.1:18888/health · README.md:116
🔗 Medium · External URL · https://api.your-provider.com/v1 · README.md:147
🔗 Medium · External URL · http://127.0.0.1:18888/your-provider/chat/completions · README.md:156
🔗 Medium · External URL · https://api.your-provider.com/v1/chat/completions · README.md:157
🔗 Medium · External URL · http://127.0.0.1:18888/openai/chat/completions · README.md:259
🔗 Medium · External URL · http://127.0.0.1:18888/bailian/chat/completions · README.md:272
🔗 Medium · External URL · https://api.groq.com/openai/v1 · scripts/llm-proxy-config.json:49
🔗 Medium · External URL · https://api.cloudflare.com/client/v4/accounts · scripts/llm-proxy-config.json:55
🔗 Medium · External URL · https://api.deepseek.com/v1 · scripts/llm-proxy-config.json:61
🔗 Medium · External URL · https://api.moonshot.cn/v1 · scripts/llm-proxy-config.json:67
🔗 Medium · External URL · https://open.bigmodel.cn/api/paas/v4 · scripts/llm-proxy-config.json:73
🔗 Medium · External URL · https://api.siliconflow.cn/v1 · scripts/llm-proxy-config.json:79
🔗 Medium · External URL · https://openrouter.ai/api/v1 · scripts/llm-proxy-config.json:98
🔗 Medium · External URL · https://integrate.api.nvidia.com/v1 · scripts/llm-proxy-config.json:104
🔗 Medium · External URL · https://coding.dashscope.aliyuncs.com/v1 · scripts/llm-proxy-config.json:110
🔗 Medium · External URL · https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxin_workshop · scripts/llm-proxy-config.json:116
🔗 Medium · External URL · https://spark-api.xf-yun.com/v3.5/chat · scripts/llm-proxy-config.json:122
🔗 Medium · External URL · https://api.minimax.chat/v1 · scripts/llm-proxy-config.json:128
🔗 Medium · External URL · https://api.lingyiwanwu.com/v1 · scripts/llm-proxy-config.json:134
🔗 Medium · External URL · https://api.baichuan-ai.com/v1 · scripts/llm-proxy-config.json:140
🔗 Medium · External URL · https://api.together.xyz/v1 · scripts/llm-proxy-config.json:146
🔗 Medium · External URL · https://api.fireworks.ai/inference/v1 · scripts/llm-proxy-config.json:152
🔗 Medium · External URL · https://api.replicate.com/v1 · scripts/llm-proxy-config.json:158
📧 Info · Email address · [email protected] · scripts/content-filter-rules.json:4
📧 Info · Email address · [email protected] · scripts/llm-proxy.py:513

Directory Structure

7 files · 49.6 KB · 1,748 lines
Python 1 file · 608 lines | Markdown 2 files · 539 lines | JSON 2 files · 412 lines | Shell 2 files · 189 lines
├─ 📁 scripts
│ ├─ 📋 content-filter-rules.json JSON 248L · 7.5 KB
│ ├─ 🔧 llm-proxy-common.sh Shell 55L · 1.4 KB
│ ├─ 📋 llm-proxy-config.json JSON 164L · 4.4 KB
│ ├─ 🔧 llm-proxy-ctl.sh Shell 134L · 3.4 KB
│ └─ 🐍 llm-proxy.py Python 608L · 23.6 KB
├─ 📝 README.md Markdown 347L · 6.2 KB
└─ 📝 SKILL.md Markdown 192L · 3.1 KB

Dependency Analysis (1)

Package · Version · Source · Known Vulnerabilities · Notes
Python standard library only · N/A · stdlib · – · Uses only json, re, time, os, sys, signal, threading, http.server, socketserver, urllib — no pip packages needed

Security Highlights

✓ Content filter rules are comprehensive and well-structured with L1 (malicious command), L2 (sensitive content), and L3 (LLM review) layers
✓ Credential patterns (sk-, AKIA-, ghp_) are detected via regex in the filter rules
✓ Authorization headers are redacted in log entries (***REDACTED***)
✓ API keys in response previews are masked with regex substitution
✓ Proxy binds only to 127.0.0.1 (not exposed to the internet)
✓ Request body size is limited (10MB) to prevent DoS
✓ Uses only Python standard library — no third-party dependencies to compromise
✓ Response data field removed from logs (blocked responses only log alert metadata, not content)
✓ Thread-safe logging with locks prevents interleaved or corrupted log entries
✓ Config keys prefixed with '_' are ignored during loading
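The credential detection and masking behavior the highlights describe can be sketched as follows. The patterns are modeled on the sk- / AKIA / ghp_ prefixes mentioned in the report, not copied from content-filter-rules.json:

```python
import re

# Illustrative credential patterns (OpenAI-style key, AWS access key ID,
# GitHub personal access token). The skill's actual regexes may differ.
CRED_PATTERN = re.compile(
    r"\b(sk-[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36})\b"
)

def mask_credentials(text: str) -> str:
    # Keep a 4-character prefix for debugging; mask the rest.
    return CRED_PATTERN.sub(lambda m: m.group(0)[:4] + "***REDACTED***", text)

masked = mask_credentials("Authorization: Bearer sk-abcdefghijklmnop")
# masked no longer contains the full key
```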