Scan Report
Score: 68/100
LLM Proxy
Multi-provider LLM API proxy with content security auditing, streaming response detection, and health monitoring
The skill acts as an unauthenticated local proxy that forwards the user's API credentials to external LLM providers. Its content-security layer claims to block malicious traffic, but critical alerts (including credential-theft and reverse-shell patterns) are only logged, creating a deceptive illusion of security.
Verdict: Do not install this skill.
Do not deploy. The skill forwards credentials without any authorization check, its blocking mechanism is disabled for critical-severity rules (the tier that covers credential exfiltration), and it uses shell execution that SKILL.md never declares.
Attack Chain (5 steps)

1. Entry: The user invokes the skill with "启动llm-proxy" ("start llm-proxy"). (SKILL.md:17)
2. Escalation: The skill runs llm-proxy-ctl.sh, which launches the daemon as a background process (`&`) and frees the port with `kill`/`kill -9`; this is a shell:WRITE capability used without declaration. (scripts/llm-proxy-ctl.sh:34)
3. Escalation: The proxy binds TCP port 18888 on 127.0.0.1; network:WRITE is not declared in SKILL.md. (scripts/llm-proxy.py:482)
4. Escalation: An attacker sends a POST request carrying a stolen API key through the proxy; no authentication is required, since block=false. (scripts/llm-proxy.py:259)
5. Impact: Rule CRED-001 fires, but with block=false the credential exfiltration is only logged, not prevented, and the stolen key is forwarded to the upstream provider. (scripts/content-filter-rules.json:217)

Findings (8 items)
| Severity | Finding | Category | Location |
|---|---|---|---|
| Critical | Content blocking disabled for critical severity; credential exfiltration not prevented | Credential Theft | scripts/content-filter-rules.json:217 |
| Critical | API key forwarding without authorization | Credential Theft | scripts/llm-proxy.py:259 |
| High | Documentation claims blocking, but the code does not block | Doc Mismatch | SKILL.md:99 |
| High | Undeclared persistent background service | Sensitive Access | scripts/llm-proxy-ctl.sh:34 |
| High | Undeclared shell and process-management capabilities | Doc Mismatch | scripts/llm-proxy-ctl.sh:1 |
| Medium | SIGUSR1 debug handler exposes full thread stacks | RCE | scripts/llm-proxy.py:471 |
| Medium | Verbose request/response logging to a user-writable directory | Data Exfil | scripts/llm-proxy.py:361 |
| Low | No dependencies declared (no requirements.txt or package.json) | Supply Chain | scripts/llm-proxy.py:1 |
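The two critical findings share one failure mode: rules fire, but a per-rule `block` flag decides whether anything is actually stopped. The sketch below illustrates that pattern under stated assumptions; the function and rule shapes (`apply_rule`, the `CRED-001` dict) are illustrative, not the actual code in scripts/llm-proxy.py.

```python
# Minimal sketch of the flaw: a filter engine that evaluates rules but
# forwards the request anyway whenever the rule's "block" flag is false.
def apply_rule(rule, body):
    """Return (action, alert) for one rule applied to a request body."""
    if rule["pattern"] in body:
        alert = {"rule": rule["id"], "severity": rule["severity"]}
        # The alert fires regardless, but nothing stops the request
        # unless block is explicitly true -- the report's core complaint.
        return ("BLOCKED" if rule.get("block") else "FORWARDED"), alert
    return "FORWARDED", None

# Hypothetical rule mirroring the finding: critical severity, block disabled.
cred_rule = {"id": "CRED-001", "severity": "critical",
             "pattern": "sk-", "block": False}
action, alert = apply_rule(cred_rule, "Authorization: Bearer sk-live-abc123")
print(action, alert["rule"])  # FORWARDED CRED-001
```

A single-line fix in such a design (`"block": true` for critical rules) would close the gap, which is what makes the shipped default so notable.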
Capability Declarations

| Resource | Declared | Inferred | Status | Evidence |
|---|---|---|---|---|
| Network | NONE | WRITE | ✗ Violation | SKILL.md:1 — SKILL.md declares no network access, but the proxy opens TCP port 1… |
| Shell | NONE | WRITE | ✗ Violation | llm-proxy-ctl.sh:34,47 — Uses lsof, kill, kill -9, backgrounding python3; SKILL.… |
| Filesystem | NONE | WRITE | ✗ Violation | llm-proxy-ctl.sh:35 — mkdir -p for log dirs; llm-proxy.py:89 — writes to ~/.open… |
| Environment | NONE | READ | ✗ Violation | llm-proxy.py:34-37 — reads LLMPROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE from os.e… |
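The environment violation is the easiest to picture. The sketch below reconstructs what reads like the startup configuration at llm-proxy.py:34-37; the variable names come from the evidence column, while the default values are assumptions for illustration.

```python
import os

# Undeclared environment reads, as flagged in the capability table:
# the skill consumes these variables without SKILL.md declaring
# environment access. Defaults here are assumed, not confirmed.
config_path = os.environ.get("LLMPROXY_CONFIG", "llm-proxy-config.json")
rules_file = os.environ.get("RULES_FILE", "content-filter-rules.json")
port = int(os.environ.get("LLM_PROXY_PORT", "18888"))  # assumed default port

print(port)  # 18888 when LLM_PROXY_PORT is unset
```

Reads like these are harmless in isolation; the finding is the mismatch with a manifest that declares no environment access at all.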
25 findings

| Severity | Type | Value | Location |
|---|---|---|---|
| Medium | External URL | http://127.0.0.1:18888/health | README.md:116 |
| Medium | External URL | https://api.your-provider.com/v1 | README.md:147 |
| Medium | External URL | http://127.0.0.1:18888/your-provider/chat/completions | README.md:156 |
| Medium | External URL | https://api.your-provider.com/v1/chat/completions | README.md:157 |
| Medium | External URL | http://127.0.0.1:18888/openai/chat/completions | README.md:259 |
| Medium | External URL | http://127.0.0.1:18888/bailian/chat/completions | README.md:272 |
| Medium | External URL | https://api.groq.com/openai/v1 | scripts/llm-proxy-config.json:49 |
| Medium | External URL | https://api.cloudflare.com/client/v4/accounts | scripts/llm-proxy-config.json:55 |
| Medium | External URL | https://api.deepseek.com/v1 | scripts/llm-proxy-config.json:61 |
| Medium | External URL | https://api.moonshot.cn/v1 | scripts/llm-proxy-config.json:67 |
| Medium | External URL | https://open.bigmodel.cn/api/paas/v4 | scripts/llm-proxy-config.json:73 |
| Medium | External URL | https://api.siliconflow.cn/v1 | scripts/llm-proxy-config.json:79 |
| Medium | External URL | https://openrouter.ai/api/v1 | scripts/llm-proxy-config.json:98 |
| Medium | External URL | https://integrate.api.nvidia.com/v1 | scripts/llm-proxy-config.json:104 |
| Medium | External URL | https://coding.dashscope.aliyuncs.com/v1 | scripts/llm-proxy-config.json:110 |
| Medium | External URL | https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxin_workshop | scripts/llm-proxy-config.json:116 |
| Medium | External URL | https://spark-api.xf-yun.com/v3.5/chat | scripts/llm-proxy-config.json:122 |
| Medium | External URL | https://api.minimax.chat/v1 | scripts/llm-proxy-config.json:128 |
| Medium | External URL | https://api.lingyiwanwu.com/v1 | scripts/llm-proxy-config.json:134 |
| Medium | External URL | https://api.baichuan-ai.com/v1 | scripts/llm-proxy-config.json:140 |
| Medium | External URL | https://api.together.xyz/v1 | scripts/llm-proxy-config.json:146 |
| Medium | External URL | https://api.fireworks.ai/inference/v1 | scripts/llm-proxy-config.json:152 |
| Medium | External URL | https://api.replicate.com/v1 | scripts/llm-proxy-config.json:158 |
| Info | Email address | [email protected] | scripts/content-filter-rules.json:4 |
| Info | Email address | [email protected] | scripts/llm-proxy.py:513 |

File Tree
7 files · 49.6 KB · 1748 lines (Python: 1 file, 608 lines · Markdown: 2 files, 539 lines · JSON: 2 files, 412 lines · Shell: 2 files, 189 lines)

├─ scripts
│  ├─ content-filter-rules.json (JSON)
│  ├─ llm-proxy-common.sh (Shell)
│  ├─ llm-proxy-config.json (JSON)
│  ├─ llm-proxy-ctl.sh (Shell)
│  └─ llm-proxy.py (Python)
├─ README.md (Markdown)
└─ SKILL.md (Markdown)
Dependencies (1 item)

| Package | Version | Source | Known Vulns | Notes |
|---|---|---|---|---|
| Python standard library only | N/A | stdlib | No | Uses only json, re, time, os, sys, signal, threading, http.server, socketserver, urllib; no pip packages needed |
Security Positives
✓ Content filter rules are comprehensive and well-structured with L1 (malicious command), L2 (sensitive content), and L3 (LLM review) layers
✓ Credential patterns (sk-, AKIA-, ghp_) are detected via regex in the filter rules
✓ Authorization headers are redacted in log entries (***REDACTED***)
✓ API keys in response previews are masked with regex substitution
✓ Proxy binds only to 127.0.0.1 (not exposed to the internet)
✓ Request body size is limited (10MB) to prevent DoS
✓ Uses only Python standard library — no third-party dependencies to compromise
✓ Response data field removed from logs (blocked responses only log alert metadata, not content)
✓ Thread-safe logging with locks prevents log injection
✓ Config keys prefixed with '_' are ignored during loading
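The masking and redaction positives above describe a common technique: substitute the secret portion of any recognizable key before it reaches a log preview. The sketch below is a hedged illustration of that technique using the prefixes the report names (sk-, AKIA, ghp_); the actual patterns in scripts/content-filter-rules.json may differ, and `mask_keys` is an assumed name, not the shipped function.

```python
import re

# Match a known key prefix followed by the key body, capturing the prefix
# so the masked output still shows which kind of credential was redacted.
KEY_RE = re.compile(r"\b(sk-|AKIA|ghp_)[A-Za-z0-9_-]+")

def mask_keys(text: str) -> str:
    """Replace the secret portion of a key, keeping the prefix for context."""
    return KEY_RE.sub(r"\1***REDACTED***", text)

print(mask_keys("Authorization: Bearer sk-abc123"))
# Authorization: Bearer sk-***REDACTED***
```

Keeping the prefix visible is a deliberate trade-off: logs stay useful for debugging ("an OpenAI-style key passed through here") while the secret itself never lands on disk.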