Evidence-Based, Multi-Dimensional, Reviewable Reports · Free

Code paths, network calls, dependency risk — deliverable verdict in minutes.

clawsafe · scan_engine · ready · v2.4.1
4601 Samples reviewed
341 Threats and anomalies found
5 New risky samples in 7 days
Platform misses surfaced
Risk Intelligence · Live

High-Risk Skills Identified

View full risk board →

Not a popularity chart. These are skills flagged for blocking or manual review, ranked by risk score.

| #  | Skill               | Verdict   | Risk | Source        | Age          |
|----|---------------------|-----------|------|---------------|--------------|
| 01 | math-calculator     | Block     | 100  | GitHub        | Apr 2, 2026  |
| 02 | messenger_send_node | Block     | 95   | Manual upload | Apr 4, 2026  |
| 03 | vnstock-env-setup   | Block     | 92   | Manual upload | Apr 5, 2026  |
| 04 | luci-memory         | Block     | 85   | Manual upload | Apr 5, 2026  |
| 05 | ludwitt-university  | High Risk | 75   | ClawHub       | Apr 12, 2026 |
| 06 | memolecard-auto     | High Risk | 75   | Manual upload | Apr 5, 2026  |
| 07 | hive-commander      | High Risk | 75   | Manual upload | Apr 5, 2026  |
| 08 | boss-ai-assistant   | High Risk | 75   | Manual upload | Apr 4, 2026  |
Trust Criteria

How we decide whether a skill deserves trust

Not from demos, not from download counts, but from the evidence left behind in code, metadata, and runtime intent.

01
Declared vs actual capability

We line up claimed resources against the real shell, network, filesystem, and environment behavior inferred from the code.
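At its core this comparison reduces to a set difference between declared and inferred capabilities. A minimal sketch, assuming a flat capability-label scheme (the labels and function name here are illustrative, not the engine's actual schema):

```python
# Hypothetical sketch: compare a skill's declared capabilities against
# behaviors inferred from its code. Labels and data are illustrative.

def capability_drift(declared: set[str], inferred: set[str]) -> dict:
    """Return undeclared behavior (the risk) and unused claims (the noise)."""
    return {
        "undeclared": sorted(inferred - declared),  # does more than it claims
        "unused": sorted(declared - inferred),      # claims more than it does
    }

declared = {"filesystem:read"}
inferred = {"filesystem:read", "network:outbound", "shell:exec"}

print(capability_drift(declared, inferred))
# the undeclared network egress and shell execution are the red flags
```

Anything in the "undeclared" bucket is capability drift; outbound network access or shell execution that was never claimed is exactly what pushes a skill toward a Block verdict.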

02
Hidden execution and egress

We check for encoded execution, command composition, outbound URLs, hard-coded IPs, credential patterns, and dangerous command chains.
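A few of these checks can be approximated with simple pattern matching. The sketch below is illustrative only, with deliberately simplistic regexes; the real rule set for a scan engine would be far richer:

```python
import re

# Illustrative signal patterns only; a production engine's rules are
# more precise and context-aware than these regexes.
SIGNALS = {
    "encoded_exec": re.compile(r"exec\s*\(\s*base64"),
    "hardcoded_ip": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "outbound_url": re.compile(r"https?://[^\s\"']+"),
    "credential": re.compile(r"(?i)(api[_-]?key|secret|token)\s*="),
}

def scan_source(text: str) -> list[str]:
    """Return the names of the signals that fire on a blob of source code."""
    return [name for name, rx in SIGNALS.items() if rx.search(text)]

sample = 'API_KEY = "abc"\nrequests.post("http://203.0.113.7/x")'
print(scan_source(sample))  # hard-coded IP, outbound URL, credential pattern
```

Each fired signal becomes a piece of evidence tied to a file and line, which is what lets a reviewer verify the finding instead of trusting a bare score.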

03
Supply chain and dependency hygiene

We look for unpinned packages, known vulnerabilities, suspicious download paths, and third-party components that widen the risk surface.
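The unpinned-package check, for instance, can be sketched as a pass over requirements.txt-style lines. This is a heuristic under one stated assumption: anything without an exact `==` pin counts as unpinned:

```python
# Sketch of an unpinned-dependency check over requirements.txt-style
# lines. Heuristic only: treats anything lacking an exact "==" pin as
# unpinned; real requirement specifiers (PEP 508) are more varied.

def unpinned(requirements: list[str]) -> list[str]:
    flagged = []
    for line in requirements:
        line = line.split("#")[0].strip()   # drop inline comments
        if line and "==" not in line:
            flagged.append(line)
    return flagged

reqs = ["requests==2.31.0", "pyyaml", "cryptography>=40  # floating"]
print(unpinned(reqs))
```

A floating version range means the code you reviewed is not necessarily the code that installs next week, which is why unpinned dependencies widen the risk surface.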

04
Reviewable evidence

Every judgment should land on files, lines, artifacts, dependencies, and attack-chain steps instead of ending at a score.

Live Signals

Recently reviewed skills

These samples show the issues the system is seeing in real submissions. The real user question is not what a skill does, but whether it deserves installation.

Report Contract
01

What does the skill claim to do, and what does the code actually do?

02

Does it hide execution, egress, credential activity, or an identifiable attack chain?

03

Which files, dependencies, or artifacts are the key reasons it should not be trusted?

04

If the team keeps using it, should they block, isolate, or manually review it next?

Decision Flow

From input to install decision in four steps

The goal is not to keep users on the scan page. It is to move them to an actionable decision as fast as possible.

01
Receive input

Repos, skill pages, archives, and raw URLs enter the same review pipeline.

02
Extract evidence

We first read the file tree, sensitive files, artifacts, dependencies, and declared metadata.

03
Infer intent

We then compare declared capability with actual behavior to see whether the skill drifts, deceives, or attacks.

04
Make the decision

The report ends in a block, review, or allow recommendation with evidence attached.
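The final step can be sketched as a mapping from accumulated findings to a recommendation. The thresholds below are illustrative assumptions (chosen to mirror the Block-at-85-and-up, High-Risk-at-75 tiers in the risk board above), not the engine's actual policy:

```python
# Hypothetical sketch of the last pipeline step: map findings to a
# block / review / allow recommendation. Thresholds are illustrative.

def verdict(risk_score: int, has_attack_chain: bool) -> str:
    """An identifiable attack chain blocks outright; otherwise score decides."""
    if has_attack_chain or risk_score >= 85:
        return "block"
    if risk_score >= 60:
        return "review"
    return "allow"

print(verdict(100, True))   # block
print(verdict(75, False))   # review
print(verdict(20, False))   # allow
```

Whatever the exact policy, the point of the contract is that the recommendation arrives with its evidence attached, so the install decision is auditable.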

Next Action

Throw the skill in first. Decide whether to trust it after.

If you already have a target to review, start now. The output is structured for engineering, security, and audit teams to act on together.

No account required · Free for open repos