Is Llm Plugin Artifacts Safe?

Llm Plugin Artifacts — Nerq Trust Score 68.0/100 (Grade C). Based on an analysis of 5 trust dimensions, it is generally safe but with some reservations. Last updated: 2026-04-02.

Use Llm Plugin Artifacts with caution. Llm Plugin Artifacts is a software tool with a Nerq Trust Score of 68.0/100 (C), based on 5 independently measured data dimensions. This is below the recommended threshold of 70. Security: 0/100. Maintenance: 1/100. Popularity: 0/100. Data is drawn from multiple public sources including package registries, GitHub, NVD, OSV.dev, and OpenSSF Scorecard. Last updated: 2026-04-02. Machine-readable data is available (JSON).

Is Llm Plugin Artifacts Safe?

CAUTION — Llm Plugin Artifacts has a Nerq Trust Score of 68.0/100 (C). It shows moderate trust signals but has some areas of concern. It is suitable for development use — review its security and maintenance signals before production deployment.


What Is Llm Plugin Artifacts's Trust Score?

Llm Plugin Artifacts has a Nerq Trust Score of 68.0/100, earning a C grade. This score is based on 5 independently measured dimensions including security, maintenance, and community adoption.

Security: 0/100
Compliance: 100/100
Maintenance: 1/100
Documentation: 0/100
Popularity: 0/100

What Are the Key Security Findings for Llm Plugin Artifacts?

Llm Plugin Artifacts's strongest signal is compliance at 100/100. No known vulnerabilities have been detected. It has not yet reached the Nerq Verified threshold of 70+.

Security: 0/100 (weak)
Maintenance: 1/100 — low maintenance activity
Compliance: 100/100 — covers 52 of 52 jurisdictions
Documentation: 0/100 — limited documentation
Popularity: 0/100 — minimal community adoption

What Is Llm Plugin Artifacts and Who Maintains It?

Author: eggmasonvalue
Category: coding
Source: https://github.com/eggmasonvalue/llm-plugin-artifacts

Regulatory Compliance

EU AI Act Risk Class: MINIMAL
Compliance Score: 100/100
Jurisdictions: Assessed across 52 jurisdictions

Popular Alternatives in coding

Significant-Gravitas/AutoGPT: 74.7/100 · B (GitHub)
ollama/ollama: 73.8/100 · B (GitHub)
langchain-ai/langchain: 86.4/100 · A (GitHub)
x1xhlol/system-prompts-and-models-of-ai-tools: 73.8/100 · B (GitHub)
anomalyco/opencode: 87.9/100 · A (GitHub)

What Is Llm Plugin Artifacts?

Llm Plugin Artifacts is a software tool in the coding category: it manages custom skills and workflows for Antigravity. Nerq Trust Score: 68.0/100 (C).

Nerq independently analyzes every software tool, app, and extension across multiple trust signals, including security vulnerabilities, maintenance activity, license compliance, and community adoption.

How Nerq Assesses Llm Plugin Artifacts's Safety

Nerq's Trust Score is calculated from 13+ independent signals aggregated into five dimensions: security, maintenance, documentation, compliance, and popularity. Llm Plugin Artifacts's per-dimension scores are listed above.

The overall Trust Score of 68.0/100 (C) reflects the weighted combination of these signals. This is below the Nerq Verified threshold of 70. We recommend additional due diligence before production deployment.
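
To make the aggregation concrete, here is a minimal Python sketch of a weighted dimension combination. The weights are hypothetical placeholders: Nerq does not publish its actual weighting on this page, so this illustration will not reproduce the 68.0/100 headline score.

  # Minimal sketch: combine per-dimension scores (0-100) into one overall
  # score. The weights below are hypothetical, not Nerq's real weighting.
  def weighted_trust_score(dimensions: dict[str, float],
                           weights: dict[str, float]) -> float:
      total = sum(weights.values())
      return sum(dimensions[d] * weights[d] for d in dimensions) / total

  dimensions = {"security": 0, "compliance": 100, "maintenance": 1,
                "documentation": 0, "popularity": 0}
  weights = {d: 1.0 for d in dimensions}  # equal weights, for illustration
  print(weighted_trust_score(dimensions, weights))  # 20.2, not 68.0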

Who Should Use Llm Plugin Artifacts?

Llm Plugin Artifacts is designed for developers who manage custom skills and workflows for Antigravity, the use case described in its coding-category listing.

Risk guidance: Llm Plugin Artifacts is suitable for development and testing environments. Before production deployment, conduct a thorough review of its security posture, review the specific trust signals above, and consider whether a higher-scored alternative meets your requirements.

How to Verify Llm Plugin Artifacts's Safety Yourself

While Nerq provides automated trust analysis, we recommend these additional steps before adopting any software tool:

  1. Check the source code — Review the repository's security policy, open issues, and recent commits for signs of active maintenance.
  2. Scan dependencies — Use tools like npm audit, pip-audit, or snyk to check for known vulnerabilities in Llm Plugin Artifacts's dependency tree.
  3. Review permissions — Understand what access Llm Plugin Artifacts requires. Software tools should follow the principle of least privilege.
  4. Test in isolation — Run Llm Plugin Artifacts in a sandboxed environment before granting access to production data or systems.
  5. Monitor continuously — Use Nerq's API to set up automated trust checks (see the sketch after this list): GET nerq.ai/v1/preflight?target=llm-plugin-artifacts
  6. Check the license — Confirm that Llm Plugin Artifacts's license is compatible with your intended use case. Pay attention to restrictions on commercial use, redistribution, and derivative works. Some AI tools use dual licensing or have separate terms for enterprise customers that differ from the open-source license.
  7. Check community signals — Look at the project's issue tracker, discussion forums, and social media presence. A healthy community actively reports bugs, contributes fixes, and discusses security concerns openly. Low community engagement may indicate limited peer review of the codebase.
Common Safety Concerns with Llm Plugin Artifacts

When evaluating whether Llm Plugin Artifacts is safe, consider these category-specific risks:

Data handling

Understand how Llm Plugin Artifacts processes, stores, and transmits your data. Review the tool's privacy policy and data retention practices, especially for sensitive or proprietary information.

Dependency security

Check Llm Plugin Artifacts's dependency tree for known vulnerabilities. Tools with outdated or unmaintained dependencies pose a higher security risk.

Update frequency

Regularly check for updates to Llm Plugin Artifacts. Security patches and bug fixes are only effective if you're running the latest version.

Third-party integrations

If Llm Plugin Artifacts connects to external APIs or services, each integration point is a potential attack surface. Audit all third-party connections, verify that data shared with external services is minimized, and ensure that integration credentials are rotated regularly.

License and IP compliance

Verify that Llm Plugin Artifacts's license is compatible with your intended use case. Some AI tools have restrictive licenses that limit commercial use, redistribution, or derivative works. Using Llm Plugin Artifacts in violation of its license can expose your organization to legal liability.

Llm Plugin Artifacts and the EU AI Act

Llm Plugin Artifacts is classified as Minimal Risk under the EU AI Act. This is the lowest risk category, meaning it faces minimal regulatory requirements. However, transparency obligations still apply.

Nerq's compliance assessment covers 52 jurisdictions worldwide. For organizations deploying AI tools in regulated environments, understanding these classifications is essential for legal compliance.

Best Practices for Using Llm Plugin Artifacts Safely

Whether you're an individual developer or an enterprise team, these practices will help you get the most from Llm Plugin Artifacts while minimizing risk:

Conduct regular audits

Periodically review how Llm Plugin Artifacts is used in your workflow. Check for unexpected behavior, permissions drift, and compliance with your security policies.

Keep dependencies updated

Ensure Llm Plugin Artifacts and all its dependencies are running the latest stable versions to benefit from security patches.

Follow least privilege

Grant Llm Plugin Artifacts only the minimum permissions it needs to function. Avoid granting admin or root access.

Monitor for security advisories

Subscribe to Llm Plugin Artifacts's security advisories and vulnerability disclosures. Use Nerq's API to get automated trust score updates (a sample check is sketched above).

Document usage policies

Create and maintain a clear policy for how Llm Plugin Artifacts is used within your organization, including data handling guidelines and acceptable use cases.

When Should You Avoid Llm Plugin Artifacts?

Even promising tools aren't right for every situation. Consider avoiding Llm Plugin Artifacts where its weakest signals matter most: production systems that require a strong security posture (security: 0/100), long-lived deployments that depend on active upkeep (maintenance: 1/100), and contexts where broad community vetting is important (popularity: 0/100).

For each scenario, evaluate whether Llm Plugin Artifacts's 68.0/100 score meets your organization's risk tolerance. We recommend running a manual security assessment alongside the automated Nerq score.

How Llm Plugin Artifacts Compares to Industry Standards

Nerq indexes over 6 million software tools, apps, and packages across dozens of categories. Among coding tools, the average Trust Score is 62/100; Llm Plugin Artifacts's 68.0/100 sits above that average.

This positions Llm Plugin Artifacts favorably among coding tools. While it outperforms the average, there is still room for improvement in certain trust dimensions.

Industry benchmarks matter because they contextualize a tool's safety profile. A score that looks moderate in isolation may actually represent strong performance within a challenging category — or vice versa. Nerq's category-relative analysis helps teams make informed decisions by showing not just absolute quality, but how a tool ranks against its direct peers.

Trust Score History

Nerq continuously monitors Llm Plugin Artifacts and recalculates its Trust Score as new data becomes available. Our scoring engine ingests real-time signals from source repositories, vulnerability databases (NVD, OSV.dev), package registries, and community metrics. When a new CVE is published, a major release ships, or maintenance patterns change, Llm Plugin Artifacts's score is updated within 24 hours.

Historical trust trends reveal whether a tool is improving, stable, or declining over time. A tool that consistently maintains or improves its score demonstrates an ongoing commitment to security and quality. Conversely, a downward trend may signal reduced maintenance, growing technical debt, or unresolved vulnerabilities. To track Llm Plugin Artifacts's score over time, use the Nerq API (see the sketch at the end of this section): GET nerq.ai/v1/preflight?target=llm-plugin-artifacts&include=history

Nerq retains trust score snapshots at regular intervals, enabling trend analysis across weeks and months. Enterprise users can access detailed historical reports showing how each dimension — security, maintenance, documentation, compliance, and community — has evolved independently, providing granular visibility into which aspects of Llm Plugin Artifacts are strengthening or weakening over time.
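
As a rough illustration of trend analysis, the Python sketch below fetches the history endpoint mentioned above and classifies the trend. The "history" and "score" field names are assumptions about the response shape, not a documented schema.

  import json
  import urllib.request

  # Endpoint taken from this page; the response shape is assumed.
  url = "https://nerq.ai/v1/preflight?target=llm-plugin-artifacts&include=history"
  with urllib.request.urlopen(url) as resp:
      history = json.load(resp)["history"]  # assumed: list of snapshots

  scores = [snap["score"] for snap in history]  # assumed field name
  if len(scores) >= 2:
      delta = scores[-1] - scores[0]
      trend = "improving" if delta > 0 else "declining" if delta < 0 else "stable"
      print(f"Trend over {len(scores)} snapshots: {trend} ({delta:+.1f} points)")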

Llm Plugin Artifacts vs Alternatives

In the coding category, Llm Plugin Artifacts scores 68.0/100, and higher-scoring alternatives are available. For a detailed comparison, see the alternatives listed above.

Key Takeaways

Llm Plugin Artifacts scores 68.0/100 (C), below the Nerq Verified threshold of 70. Its strongest signal is compliance (100/100), while security (0/100), maintenance (1/100), documentation (0/100), and popularity (0/100) are weak. It is suitable for development use; review its security and maintenance signals before any production deployment.

Frequently Asked Questions

Is Llm Plugin Artifacts safe to use?
Use with caution. llm-plugin-artifacts has a Nerq Trust Score of 68.0/100 (C). Strongest signal: compliance (100/100). The score is based on security (0/100), maintenance (1/100), popularity (0/100), and documentation (0/100).
What is Llm Plugin Artifacts's trust score?
llm-plugin-artifacts: 68.0/100 (C). The score is based on security (0/100), maintenance (1/100), popularity (0/100), and documentation (0/100). Compliance: 100/100. Scores are updated as new data becomes available. API: GET nerq.ai/v1/preflight?target=llm-plugin-artifacts
What are safer alternatives to Llm Plugin Artifacts?
In the coding category, higher-scoring alternatives include Significant-Gravitas/AutoGPT (75/100), ollama/ollama (74/100), and langchain-ai/langchain (86/100). llm-plugin-artifacts scores 68.0/100.
How often is Llm Plugin Artifacts's safety score updated?
Nerq continuously monitors Llm Plugin Artifacts and updates its trust score as new data becomes available. Data is drawn from multiple public sources including package registries, GitHub, NVD, OSV.dev, and OpenSSF Scorecard. Current: 68.0/100 (C), last verified 2026-04-02. API: GET nerq.ai/v1/preflight?target=llm-plugin-artifacts
Can I use Llm Plugin Artifacts in a regulated environment?
Llm Plugin Artifacts has not reached the Nerq Verified threshold of 70. Additional due diligence is recommended for regulated environments.

Disclaimer: Nerq Trust Scores are automated assessments based on publicly available signals. They do not constitute a recommendation or a guarantee. Always conduct your own verification.
