Low Risk — Risk Score 15/100
Last scan: 21 hours ago
opencrawl
Crawl any JavaScript-rendered webpage through distributed real Chrome browsers
OpenCrawl is a straightforward web-crawling skill that proxies requests through a distributed Chrome browser pool. The implementation is clean with no obfuscation, no credential harvesting, no shell execution, and full doc-to-code alignment.
Skill Name: opencrawl
Duration: 41.0s
Engine: pi
Safe to install
Safe to use. The hardcoded default IP (39.105.206.76) is acceptable as a public-server default, but users should be aware that all crawl requests route through it over plain HTTP. For maximum privacy, deploy a self-hosted instance.

Findings (2 items)

Low · Hardcoded default API endpoint IP address · Sensitive Access
The default API_URL is hardcoded as 'http://39.105.206.76:9877' in crawl.py:18. This IP appears in both README.md and SKILL.md. While it has a plausible explanation (a public hosted server), routing all requests through a third-party IP introduces a dependency on an external, non-TLS endpoint.
API_URL = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")
→ Use HTTPS and make the default a domain name rather than a raw IP. Users should configure their own self-hosted instance for production privacy.
tools/crawl.py:18
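The recommended fix can be sketched as follows. This is illustrative only: the domain `api.opencrawl.example` and the helper name `resolve_api_url` are hypothetical, not part of crawl.py; the point is an HTTPS default that the environment variable still overrides.

```python
import os

def resolve_api_url() -> str:
    """Return the configured endpoint, refusing plain-HTTP defaults.

    Hypothetical sketch: the default here is an HTTPS domain instead of
    the raw http:// IP currently hardcoded in crawl.py.
    """
    url = os.environ.get("OPENCRAWL_API_URL", "https://api.opencrawl.example")
    if not url.startswith("https://"):
        # Reject non-TLS endpoints so crawl traffic is never sent in the clear.
        raise ValueError(f"refusing non-TLS endpoint: {url}")
    return url
```

Self-hosted deployments would then set OPENCRAWL_API_URL to their own HTTPS endpoint, and an accidental plain-HTTP value fails loudly instead of silently leaking traffic.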
Low · Dependency version not strictly pinned · Supply Chain
requirements.txt specifies 'requests>=2.28.0' without an upper bound, allowing automatic minor/patch updates. This could theoretically allow a malicious package update, though no known vulnerabilities exist in the current version.
requests>=2.28.0
→ Pin to a specific version, e.g., 'requests==2.31.0', for reproducible builds.
requirements.txt:1
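A pinned requirements.txt would look like the sketch below. The version shown is only the example figure from the recommendation above; verify the current release (and any advisories) before pinning.

```
# requirements.txt — exact pin for reproducible installs
requests==2.31.0
```

Tools such as pip-tools or a lock file can additionally record transitive dependencies and hashes, which closes the same gap one level deeper.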
| Resource | Declared | Inferred | Status | Evidence |
|---|---|---|---|---|
| Filesystem | NONE | NONE | | crawl.py:1-174 — no file read/write operations |
| Network | READ | READ | ✓ Aligned | crawl.py:44-59 — requests.post to API, requests.get for R2 download URL |
| Shell | WRITE | WRITE | ✓ Aligned | SKILL.md declares the Bash tool; crawl.py is executed via python3 by the agent |
| Environment | READ | READ | ✓ Aligned | crawl.py:17-18 — reads OPENCRAWL_API_KEY and OPENCRAWL_API_URL only |
| Skill Invoke | NONE | NONE | | No inter-skill invocation detected |
| Clipboard | NONE | NONE | | No clipboard access in crawl.py |
| Browser | NONE | NONE | | No local browser; remote Chrome workers are accessed via API only |
| Database | NONE | NONE | | No database access |
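The Network and Environment rows describe a simple wrapper pattern: read the key and endpoint from the environment, attach the key only as a Bearer header. A minimal sketch, assuming a hypothetical /crawl path and helper name (neither is taken from crawl.py):

```python
import os

def build_request(url_to_crawl: str) -> tuple[str, dict, dict]:
    """Assemble endpoint, headers, and payload for a crawl call.

    Illustrative sketch of the pattern the scan describes: the API key
    is read from the environment and used only in the Authorization
    header, never written elsewhere.
    """
    api_key = os.environ.get("OPENCRAWL_API_KEY", "")
    api_url = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {"url": url_to_crawl}
    return f"{api_url}/crawl", headers, payload
```

The actual send would be a single `requests.post(endpoint, json=payload, headers=headers)`, which matches the Network row's observed traffic.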
4 findings (1 High, 3 Medium)
📡 High · IP Address · Hardcoded IP address
39.105.206.76
README.md:15
🔗 Medium · External URL
http://39.105.206.76:9877
README.md:19
🔗 Medium · External URL
https://clawhub.ai/hlyylly/chromeopencrawl
README.md:58
🔗 Medium · External URL
https://www.smzdm.com/p/170177008/
SKILL.md:55

File Tree

4 files · 11.5 KB · 364 lines
Markdown: 2 files, 189 lines · Python: 1 file, 174 lines · Text: 1 file, 1 line
├─ 📁 tools
│ └─ 🐍 crawl.py Python 174L · 5.4 KB
├─ 📝 README.md Markdown 62L · 2.0 KB
├─ 📄 requirements.txt Text 1L · 17 B
└─ 📝 SKILL.md Markdown 127L · 4.1 KB

Dependencies (1 item)

| Package | Version | Source | Known Vulns | Notes |
|---|---|---|---|---|
| requests | >=2.28.0 | pip | No | Version not strictly pinned; minor/patch updates allowed |

Security Positives

✓ No obfuscation detected — no base64, no eval(), no dynamic code execution
✓ Full doc-to-code alignment — all 5 declared commands (crawl, search, balance, status, raw/lite modes) are implemented in crawl.py
✓ No credential exfiltration — the API key is used only for Bearer token auth and is never written out or forwarded anywhere else
✓ No shell injection vectors — all arguments are passed as argparse parameters, not concatenated into shell strings
✓ No sensitive path access — script does not read ~/.ssh, ~/.aws, .env, or any credential files
✓ No persistence mechanisms — no cron jobs, startup hooks, or backdoor installations
✓ Minimal attack surface — only 174 lines of straightforward API wrapper code
✓ No hidden functionality — no secret subcommands or undocumented endpoints
✓ Clear error handling — all exceptions are caught and reported via JSON stderr
✓ Minimal environment access — only OPENCRAWL_API_KEY and OPENCRAWL_API_URL are read from the environment
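The "no shell injection vectors" positive rests on a standard property of argparse, sketched below for illustration (the parser shown is not the one in crawl.py): parsed arguments arrive as plain Python strings, so shell metacharacters are inert data unless a script later interpolates them into a command line, which crawl.py does not.

```python
import argparse

# Hypothetical parser mirroring the safe pattern the scan credits:
# argparse hands back plain strings; nothing here touches a shell.
parser = argparse.ArgumentParser(prog="crawl")
parser.add_argument("url")

# Even a hostile-looking value stays an ordinary string attribute.
args = parser.parse_args(["https://example.com/; rm -rf ~"])
```

Passing `args.url` straight to `requests` (rather than into `os.system` or a concatenated shell string) is what keeps this class of bug out of the script.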