Is Learning Engineer Agent Safe?
Learning Engineer Agent has a Nerq Trust Score of 65.8/100 (Grade C). Based on an analysis of 5 trust dimensions, it is assessed as generally safe but with some concerns. Last updated: 2026-04-05.
Use Learning Engineer Agent with caution. Learning Engineer Agent is a software tool with a Nerq Trust Score of 65.8/100 (C), based on 5 independent data dimensions, which is below the Nerq Verified threshold. Security: 0/100. Maintenance: 1/100. Popularity: 0/100. Data is drawn from multiple public sources, including package registries, GitHub, NVD, OSV.dev, and OpenSSF Scorecard. Last updated: 2026-04-05. Machine-readable data (JSON).
Is Learning Engineer Agent Safe?
CAUTION: Learning Engineer Agent has a Nerq Trust Score of 65.8/100 (C). It shows moderate trust signals but some areas of concern that warrant attention. It is suitable for development use; review the security and maintenance signals before production deployment.
What is Learning Engineer Agent's trust score?
Learning Engineer Agent has a Nerq Trust Score of 65.8/100 with a grade of C. This score is based on 5 independently measured dimensions, including security, maintenance, and community adoption.
What are the key security findings for Learning Engineer Agent?
Learning Engineer Agent's strongest signal is compliance at 92/100. No known vulnerabilities have been found. It has not yet reached the Nerq Verified threshold of 70+.
What is Learning Engineer Agent and who maintains it?
| Developer | sudhirnagendragupta |
| Category | Education |
| Source | https://github.com/sudhirnagendragupta/learning-engineer-agent |
| Frameworks | langchain · anthropic |
| Protocols | mcp · rest |
Regulatory compliance
| EU AI Act Risk Class | MINIMAL |
| Compliance Score | 92/100 |
| Jurisdictions | Assessed across 52 jurisdictions |
What Is Learning Engineer Agent?
Learning Engineer Agent is a software tool in the education category: an AI-powered multi-agent system for automated course development. Nerq Trust Score: 65.8/100 (C).
Nerq independently analyzes every software tool, app, and extension across multiple trust signals, including security vulnerabilities, maintenance activity, license compliance, and community adoption.
How Nerq Assesses Learning Engineer Agent's Safety
Nerq's Trust Score is calculated from 13+ independent signals aggregated into five dimensions. Here is how Learning Engineer Agent performs in each:
- Security (0/100): Learning Engineer Agent's security posture is poor. This score factors in known CVEs, dependency vulnerabilities, security policy presence, and code signing practices.
- Maintenance (1/100): Learning Engineer Agent is potentially abandoned. We track commit frequency, release cadence, issue response times, and PR merge rates.
- Documentation (1/100): Documentation quality is insufficient. This includes README completeness, API documentation, usage examples, and contribution guidelines.
- Compliance (92/100): Learning Engineer Agent is broadly compliant. Assessed against regulations in 52 jurisdictions including the EU AI Act, CCPA, and GDPR.
- Community (0/100): Community adoption is limited. Based on GitHub stars, forks, download counts, and ecosystem integrations.
The overall Trust Score of 65.8/100 (C) reflects the weighted combination of these signals. This is below the Nerq Verified threshold of 70. We recommend additional due diligence before production deployment.
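As a rough illustration of how a weighted combination works, here is a minimal sketch in Python. The dimension scores are taken from the list above; the weights are hypothetical placeholders, since Nerq does not publish its weighting.

```python
# Minimal sketch of a weighted trust-score combination.
# Dimension scores come from the list above; the weights below are
# hypothetical placeholders, not Nerq's actual (unpublished) weighting.
scores = {
    "security": 0,
    "maintenance": 1,
    "documentation": 1,
    "compliance": 92,
    "community": 0,
}

weights = {
    "security": 0.25,
    "maintenance": 0.20,
    "documentation": 0.15,
    "compliance": 0.25,
    "community": 0.15,
}

def weighted_trust_score(scores: dict, weights: dict) -> float:
    """Weighted average of dimension scores, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

print(f"Illustrative score: {weighted_trust_score(scores, weights):.1f}/100")
```

Because the published score of 65.8 reflects Nerq's own weighting over 13+ underlying signals rather than these five headline numbers alone, the placeholder weights above will not reproduce it; the sketch only illustrates the aggregation mechanism.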
Who Should Use Learning Engineer Agent?
Learning Engineer Agent is designed for:
- Developers and teams working with education tools
- Organizations evaluating AI tools for their stack
- Researchers exploring AI capabilities in this domain
Risk guidance: Learning Engineer Agent is suitable for development and testing environments. Before production deployment, conduct a thorough review of its sikkerhed posture, review the specific trust signals above, and consider whether a higher-scored alternative meets your requirements.
How to Verify Learning Engineer Agent's Safety Yourself
While Nerq provides automated trust analysis, we recommend these additional steps before adopting any software tool:
- Check the source code — Review the repository's security policy, open issues, and recent commits for signs of active maintenance.
- Scan dependencies — Use tools like npm audit, pip-audit, or snyk to check for known vulnerabilities in Learning Engineer Agent's dependency tree; a minimal sketch follows this list.
- Review permissions — Understand what access Learning Engineer Agent requires. Software tools should follow the principle of least privilege.
- Test in isolation — Run Learning Engineer Agent in a sandboxed environment before granting access to production data or systems.
- Monitor continuously — Use Nerq's API to set up automated trust checks: GET nerq.ai/v1/preflight?target=learning-engineer-agent
- Review the license — Confirm that Learning Engineer Agent's license is compatible with your intended use case. Pay attention to restrictions on commercial use, redistribution, and derivative works. Some AI tools use dual licensing or have separate terms for enterprise customers that differ from the open-source license.
- Check community signals — Look at the project's issue tracker, discussion forums, and social media presence. A healthy community actively reports bugs, contributes fixes, and discusses sikkerhed concerns openly. Low community engagement may indicate limited peer review of the codebase.
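For the dependency-scan step referenced above, here is a minimal sketch in Python. It assumes pip-audit is installed and that the project's dependencies are listed in requirements.txt; substitute npm audit or snyk for other ecosystems.

```python
"""Minimal dependency-scan sketch: run pip-audit against a requirements file.

Assumes pip-audit is installed (pip install pip-audit) and dependencies are
declared in requirements.txt; adapt the command for other ecosystems.
"""
import json
import subprocess
import sys

def scan_requirements(requirements_file: str = "requirements.txt") -> int:
    # pip-audit exits with a non-zero status when known vulnerabilities are found.
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file, "--format", "json"],
        capture_output=True,
        text=True,
    )
    if result.stdout:
        try:
            # Pretty-print the machine-readable report for review.
            print(json.dumps(json.loads(result.stdout), indent=2))
        except json.JSONDecodeError:
            print(result.stdout)
    if result.returncode != 0:
        print("Known vulnerabilities detected; review the report above.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_requirements())
```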
Common Safety Concerns with Learning Engineer Agent
When evaluating whether Learning Engineer Agent is safe, consider these category-specific risks:
Understand how Learning Engineer Agent processes, stores, and transmits your data. Review the tool's privacy policy and data retention practices, especially for sensitive or proprietary information.
Check Learning Engineer Agent's dependency tree for known vulnerabilities. Tools with outdated or unmaintained dependencies pose a higher security risk.
Regularly check for updates to Learning Engineer Agent. Security patches and bug fixes are only effective if you're running the latest version.
If Learning Engineer Agent connects to external APIs or services, each integration point is a potential attack surface. Audit all third-party connections, verify that data shared with external services is minimized, and ensure that integration credentials are rotated regularly.
Verify that Learning Engineer Agent's license is compatible with your intended use case. Some AI tools have restrictive licenses that limit commercial use, redistribution, or derivative works. Using Learning Engineer Agent in violation of its license can expose your organization to legal liability.
Learning Engineer Agent and the EU AI Act
Learning Engineer Agent is classified as Minimal Risk under the EU AI Act. This is the lowest risk category, meaning it faces minimal regulatory requirements. However, transparency obligations still apply.
Nerq's compliance assessment covers 52 jurisdictions worldwide. For organizations deploying AI tools in regulated environments, understanding these classifications is essential for legal compliance.
Best Practices for Using Learning Engineer Agent Safely
Whether you're an individual developer or an enterprise team, these practices will help you get the most from Learning Engineer Agent while minimizing risk:
Periodically review how Learning Engineer Agent is used in your workflow. Check for unexpected behavior, permissions drift, and compliance with your security policies.
Ensure Learning Engineer Agent and all its dependencies are running the latest stable versions to benefit from security patches.
Grant Learning Engineer Agent only the minimum permissions it needs to function. Avoid granting admin or root access.
Subscribe to Learning Engineer Agent's security advisories and vulnerability disclosures. Use Nerq's API to get automated trust score updates, as in the sketch below.
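As a minimal sketch of such an automated check, the snippet below polls the preflight endpoint shown earlier. The numeric "score" field and the absence of authentication are assumptions here; verify both against Nerq's API documentation.

```python
"""Minimal monitoring sketch: query Nerq's preflight endpoint and flag a low score.

The endpoint path comes from this page; the response field ("score") and any
authentication requirements are assumptions to confirm before use.
"""
import requests

NERQ_PREFLIGHT_URL = "https://nerq.ai/v1/preflight"
MINIMUM_ACCEPTABLE_SCORE = 70  # e.g. the Nerq Verified threshold

def check_trust_score(target: str = "learning-engineer-agent") -> float:
    response = requests.get(NERQ_PREFLIGHT_URL, params={"target": target}, timeout=10)
    response.raise_for_status()
    payload = response.json()
    score = float(payload["score"])  # assumed field name
    if score < MINIMUM_ACCEPTABLE_SCORE:
        print(f"WARNING: {target} trust score {score} is below {MINIMUM_ACCEPTABLE_SCORE}")
    return score

if __name__ == "__main__":
    check_trust_score()
```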
Create and maintain a clear policy for how Learning Engineer Agent is used within your organization, including data handling guidelines and acceptable use cases.
When Should You Avoid Learning Engineer Agent?
Even promising tools aren't right for every situation. Consider avoiding Learning Engineer Agent in these scenarios:
- Production environments handling sensitive customer data
- Regulated industries (healthcare, finance, government) without additional compliance review
- Mission-critical systems where downtime has significant business impact
For each scenario, evaluate whether Learning Engineer Agent's trust score of 65.8/100 meets your organization's risk tolerance. We recommend running a manual security assessment alongside the automated Nerq score.
How Learning Engineer Agent Compares to Industry Standards
Nerq indexes over 6 million software tools, apps, and packages across dozens of categories. Among education tools, the average Trust Score is 62/100, and Learning Engineer Agent's score of 65.8/100 sits above that average.
This positions Learning Engineer Agent favorably among education tools. While it outperforms the average, there is still room for improvement in certain trust dimensions.
Industry benchmarks matter because they contextualize a tool's safety profile. A score that looks moderate in isolation may actually represent strong performance within a challenging category, or vice versa. Nerq's category-relative analysis helps teams make informed decisions by showing not just absolute quality, but how a tool ranks against its direct peers.
Trust Score History
Nerq continuously monitors Learning Engineer Agent and recalculates its Trust Score as new data becomes available. Our scoring engine ingests real-time signals from source repositories, vulnerability databases (NVD, OSV.dev), package registries, and community metrics. When a new CVE is published, a major release ships, or maintenance patterns change, Learning Engineer Agent's score is updated within 24 hours.
Historical trust trends reveal whether a tool is improving, stable, or declining over time. A tool that consistently maintains or improves its score demonstrates ongoing commitment to security and quality. Conversely, a downward trend may signal reduced maintenance, growing technical debt, or unresolved vulnerabilities. To track Learning Engineer Agent's score over time, use the Nerq API: GET nerq.ai/v1/preflight?target=learning-engineer-agent&include=history
Nerq retains trust score snapshots at regular intervals, enabling trend analysis across weeks and months. Enterprise users can access detailed historical reports showing how each dimension (security, maintenance, documentation, compliance, and community) has evolved independently, providing granular visibility into which aspects of Learning Engineer Agent are strengthening or weakening over time.
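To turn those snapshots into a simple improving/declining signal, here is a minimal sketch against the history endpoint shown above. The include=history parameter comes from this page; the shape of the returned history (an oldest-to-newest list of snapshots with a "score" field) is an assumption to verify against Nerq's API documentation.

```python
"""Minimal trend sketch: compare the oldest and newest trust-score snapshots.

Assumes the history payload is a list of snapshots ordered oldest to newest,
each with a numeric "score" field; confirm the schema before relying on it.
"""
import requests

def trust_trend(target: str = "learning-engineer-agent") -> str:
    response = requests.get(
        "https://nerq.ai/v1/preflight",
        params={"target": target, "include": "history"},
        timeout=10,
    )
    response.raise_for_status()
    snapshots = response.json().get("history", [])  # assumed field name
    if len(snapshots) < 2:
        return "insufficient history"
    first, last = snapshots[0]["score"], snapshots[-1]["score"]
    if last > first:
        return "improving"
    if last < first:
        return "declining"
    return "stable"

if __name__ == "__main__":
    print(f"learning-engineer-agent trend: {trust_trend()}")
```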
Learning Engineer Agent vs Alternatives
In the education category, Learning Engineer Agent scores 65.8/100. There are higher-scoring alternatives available. For a detailed comparison, see:
- Learning Engineer Agent vs Mr.-Ranedeer-AI-Tutor — Trust Score: 73.8/100
- Learning Engineer Agent vs hello-agents — Trust Score: 79.5/100
- Learning Engineer Agent vs owl — Trust Score: 71.3/100
Key takeaways
- Learning Engineer Agent has a Trust Score of 65.8/100 (C) and is not yet Nerq Verified.
- Learning Engineer Agent shows moderate trust signals. Conduct thorough due diligence before deploying to production environments.
- Among education tools, Learning Engineer Agent scores above the category average of 62/100, demonstrating above-average reliability.
- Always verify safety independently — use Nerq's Preflight API for automated, up-to-date trust checks before integration.
Frequently asked questions
Is Learning Engineer Agent safe?
What is Learning Engineer Agent's trust score?
What are safer alternatives to Learning Engineer Agent?
How often is Learning Engineer Agent's safety score updated?
Can I use Learning Engineer Agent in a regulated environment?
Disclaimer: Nerq's trust scores are automated assessments based on publicly available signals. They do not constitute recommendations or guarantees. Always perform your own verification.