Scan Report
15/100
opencrawl
Crawl any JavaScript-rendered webpage through distributed real Chrome browsers
OpenCrawl is a straightforward web-crawling skill that proxies requests through a distributed Chrome browser pool. The implementation is clean with no obfuscation, no credential harvesting, no shell execution, and full doc-to-code alignment.
Safe to install
Safe to use. The hardcoded default IP (39.105.206.76) is acceptable as a public-server default, but users should be aware that all crawl requests route through it. For maximum privacy, deploy a self-hosted instance.
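Because the script reads its endpoint and key from the environment (OPENCRAWL_API_URL and OPENCRAWL_API_KEY, per crawl.py:17-18), routing around the public default needs no code changes. A minimal sketch; the hostname below is a placeholder for your own deployment:

```shell
# Override the default public endpoint with a self-hosted deployment.
# The hostname here is a placeholder; substitute your own instance.
export OPENCRAWL_API_URL="https://crawl.example.internal:9877"
export OPENCRAWL_API_KEY="your-api-key"
```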
Findings 2 items
| Severity | Finding | Category | Location |
|---|---|---|---|
| Low | Hardcoded default API endpoint IP address | Sensitive Access | tools/crawl.py:18 |
| Low | Dependency version not strictly pinned | Supply Chain | requirements.txt:1 |
| Resource | Declared | Inferred | Status | Evidence |
|---|---|---|---|---|
| Filesystem | NONE | NONE | — | crawl.py:1-174 — No file read/write operations |
| Network | READ | READ | ✓ Aligned | crawl.py:44-59 — requests.post to API, requests.get for R2 download URL |
| Shell | WRITE | WRITE | ✓ Aligned | SKILL.md declares Bash tool; crawl.py is executed via python3 by the agent |
| Environment | READ | READ | ✓ Aligned | crawl.py:17-18 — reads OPENCRAWL_API_KEY and OPENCRAWL_API_URL only |
| Skill Invoke | NONE | NONE | — | No inter-skill invocation detected |
| Clipboard | NONE | NONE | — | No clipboard access in crawl.py |
| Browser | NONE | NONE | — | No local browser; remote Chrome workers are accessed via API only |
| Database | NONE | NONE | — | No database access |
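The environment access in the table above can be read as a small sketch. The variable names come from crawl.py:17-18 and the default URL matches the endpoint flagged in the findings; the exact fallback logic is an assumption, not the audited implementation:

```python
import os

# Assumed shape of the configuration read at crawl.py:17-18: key and
# endpoint come from the environment, with the flagged public IP as the
# endpoint fallback. (The defaulting logic here is an assumption.)
API_KEY = os.environ.get("OPENCRAWL_API_KEY", "")
API_URL = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")

def auth_headers():
    # The key is only ever placed in a Bearer header, consistent with the
    # "no credential exfiltration" observation in this report.
    return {"Authorization": f"Bearer {API_KEY}"}
```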
Indicators 4 items (1 High, 3 Medium)

| Severity | Type | Value | Location |
|---|---|---|---|
| High | IP Address (hardcoded) | 39.105.206.76 | README.md:15 |
| Medium | External URL | http://39.105.206.76:9877 | README.md:19 |
| Medium | External URL | https://clawhub.ai/hlyylly/chromeopencrawl | README.md:58 |
| Medium | External URL | https://www.smzdm.com/p/170177008/ | SKILL.md:55 |

File Tree
4 files · 11.5 KB · 364 lines (Markdown 2 files · 189 lines, Python 1 file · 174 lines, Text 1 file · 1 line)

```
├─ tools
│  └─ crawl.py         (Python)
├─ README.md           (Markdown)
├─ requirements.txt    (Text)
└─ SKILL.md            (Markdown)
```
Dependencies 1 item
| Package | Version | Source | Known Vulns | Notes |
|---|---|---|---|---|
| requests | >=2.28.0 | pip | No | Version not strictly pinned — minor/patch updates allowed |
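To close the supply-chain window noted above, the range specifier can be replaced with an exact pin. The version shown is illustrative, not a recommendation; pin whichever release you have audited:

```text
# requirements.txt: exact pin instead of a range (version is illustrative)
requests==2.31.0
```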
Security Positives
✓ No obfuscation detected — no base64, no eval(), no dynamic code execution
✓ Full doc-to-code alignment — all 5 declared commands (crawl, search, balance, status, raw/lite modes) are implemented in crawl.py
✓ No credential exfiltration — API key is used only for Bearer token auth, never read from environment for export
✓ No shell injection vectors — all arguments are passed as argparse parameters, not concatenated into shell strings
✓ No sensitive path access — script does not read ~/.ssh, ~/.aws, .env, or any credential files
✓ No persistence mechanisms — no cron jobs, startup hooks, or backdoor installations
✓ Minimal attack surface — only 174 lines of straightforward API wrapper code
✓ No hidden functionality — no secret subcommands or undocumented endpoints
✓ Clear error handling — all exceptions are caught and reported via JSON stderr
✓ API key is scoped — only reads OPENCRAWL_API_KEY and OPENCRAWL_API_URL from environment