Code paths, network calls, dependency risk: an actionable verdict in minutes.
High-Risk Skills Identified
View full risk board →

Not a popularity chart. These are skills flagged for blocking or manual review, ranked by risk score.
| # | Skill | Verdict | Risk |
|---|---|---|---|
| 01 | math-calculator | Block | 100 |
| 02 | messenger_send_node | Block | 95 |
| 03 | vnstock-env-setup | Block | 92 |
| 04 | luci-memory | Block | 85 |
| 05 | ludwitt-university | High Risk | 75 |
| 06 | memolecard-auto | High Risk | 75 |
| 07 | hive-commander | High Risk | 75 |
| 08 | boss-ai-assistant | High Risk | 75 |
How we decide whether a skill deserves trust
Not from demos, not from download counts, but from the evidence left behind in code, metadata, and runtime intent.
We line up claimed resources against the real shell, network, filesystem, and environment behavior inferred from the code.
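This comparison can be sketched as a set difference between declared and observed capabilities. The category names below are assumptions for illustration, not the product's actual taxonomy:

```python
def capability_drift(declared: set[str], observed: set[str]) -> set[str]:
    """Capabilities the code actually exercises but the metadata never declared."""
    return observed - declared

# Hypothetical example: metadata claims filesystem access only,
# but the code also opens sockets and spawns shells.
drift = capability_drift(
    declared={"filesystem"},
    observed={"filesystem", "network", "shell"},
)
```

Any non-empty drift set is a signal that the skill does more than it admits to.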
We check for encoded execution, command composition, outbound URLs, hard-coded IPs, credential patterns, and dangerous command chains.
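A minimal sketch of these pattern checks, assuming plain regex matching over source lines. A real scanner would use AST analysis and a far larger ruleset; the patterns here are illustrative only:

```python
import re

# Illustrative rules for the signal classes named above.
PATTERNS = {
    "encoded_execution": re.compile(r"(exec|eval)\s*\(\s*base64"),
    "outbound_url": re.compile(r"https?://[^\s\"')]+"),
    "hardcoded_ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "credential": re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
    "command_chain": re.compile(r"curl[^|\n]*\|\s*(sh|bash)"),
}

def scan_source(path: str, text: str) -> list[dict]:
    """Return one finding per matched rule, with file and line as evidence."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append({"file": path, "line": lineno, "rule": rule})
    return findings
```

Because each finding carries a file and line number, the report can point at concrete evidence rather than a bare score.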
We look for unpinned packages, known vulnerabilities, suspicious download paths, and third-party components that widen the risk surface.
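The unpinned-package check can be sketched as a scan over a pip-style requirements file; this is a simplification, and a real pipeline would also query a vulnerability database such as OSV for known CVEs:

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return dependency lines that are not pinned to an exact version."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or line.startswith("-"):
            continue  # skip blank lines and pip options
        if "==" not in line:  # anything looser than an exact pin widens the surface
            unpinned.append(line)
    return unpinned
```

For example, `flask>=2.0` and a bare `numpy` would both be flagged, while `requests==2.31.0` would pass.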
Every judgment lands on specific files, lines, artifacts, dependencies, and attack-chain steps instead of stopping at a score.
Recently reviewed skills
These samples show the issues the system sees in real submissions. The real user question is not what a skill does, but whether it deserves installation.
What does the skill claim to do, and what does the code actually do?
Does it hide execution, egress, credential activity, or an identifiable attack chain?
Which files, dependencies, or artifacts are the key reasons it should not be trusted?
If the team is already using it, what comes next: block, isolate, or manual review?
From input to install decision in four steps
The goal is not to keep users on the scan page. It is to move them to an actionable decision as fast as possible.
Repos, skill pages, archives, and raw URLs enter the same review pipeline.
We first read the file tree, sensitive files, artifacts, dependencies, and declared metadata.
We then compare declared capability with actual behavior to see whether the skill drifts, deceives, or attacks.
The report ends in a block, review, or allow recommendation with evidence attached.
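The final step can be sketched as a report that maps a risk score to a recommendation and carries its evidence along. The thresholds below are assumptions chosen to match the scores in the table above (Block at 85+, manual review at 60+); the actual scoring model is not specified here:

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    skill: str
    risk_score: int
    evidence: list = field(default_factory=list)  # files, lines, artifacts

    @property
    def verdict(self) -> str:
        """Map the risk score to a block / review / allow recommendation."""
        if self.risk_score >= 85:
            return "block"
        if self.risk_score >= 60:
            return "review"
        return "allow"
```

Attaching the evidence list to the same object is what lets engineering, security, and audit teams act on one shared artifact.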
Throw the skill in first. Decide whether to trust it after.
If you already have a target to review, start now. The output is structured for engineering, security, and audit teams to act on together.
No account required · Free for open repos