

AI Output Disclaimer

Last updated: 6 May 2026 - AgentCore LTD - Company No. 17114811

Draft. This document is an AI-drafted starting point pending counsel review. The operative terms remain those agreed in your subscription contract until this draft is countersigned.

1. Read This First

Trust Agent’s listings (Agents, Roles, and Skills) generate text using large language models (LLMs). LLMs sometimes produce output that is incorrect, outdated, biased, fabricated, or otherwise misleading. The audit certificate proves that a listing’s prompt and behaviour spec match what was reviewed - it does not warrant the truthfulness of any individual reply.

You remain responsible for verifying any output before relying on it for a decision that has financial, legal, medical, safety, or reputational consequences.

2. Not Professional Advice

Nothing produced by a Trust Agent listing constitutes:

  • Medical advice - LLM output is not a substitute for diagnosis, prescription, or treatment by a qualified clinician. If you face a medical emergency, call your local emergency number.
  • Legal advice - LLM output is not a substitute for advice from a solicitor, barrister, attorney, or other qualified legal professional admitted in your jurisdiction.
  • Financial advice - LLM output is not a substitute for advice from an FCA-authorised (or equivalent) financial adviser. Trust Agent is not authorised by the FCA and does not provide investment recommendations.
  • Tax advice - Always consult a qualified accountant or tax adviser for jurisdiction-specific tax planning.
  • Mental-health diagnosis or treatment - Companion or wellbeing roles are not therapists. If you are in crisis, contact your local crisis line (Samaritans 116 123 in the UK).

3. Hallucinations and Factual Errors

LLMs can confidently generate plausible-sounding statements that are factually wrong - this is sometimes called “hallucination”. It can affect names, dates, citations, code, statistics, and quoted text. The audit pipeline checks the listing’s prompt and behaviour spec for known unsafe patterns; it cannot verify every fact in every reply.

For citations, code, or any output you intend to publish or rely on, you must verify the content against an authoritative source.

4. Bias and Fairness

LLMs reflect biases in their training data. Outputs may favour certain languages, cultures, perspectives, or demographic groups. The Trust Agent audit pipeline includes content-safety checks (SC-052 to SC-056) that flag overtly discriminatory language, but subtler bias may persist. Use AI-generated content with critical judgment, especially in hiring, lending, education, and other consequential decisions.

5. Confidentiality of Inputs

What you type into a session may be sent to a third-party LLM provider (Anthropic, OpenAI, Google, Mistral, Groq, or your own self-hosted model). Trust Agent does not log your message content, but the upstream provider may retain it under their own retention policy. Avoid sending non-public secrets, personal data of others without consent, or regulated data (health, payment, government identifiers) unless you have verified the upstream provider’s compliance posture.
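Before pasting material into a session, it can help to scrub obvious identifiers locally so they never reach the upstream provider. A minimal sketch - the patterns below are illustrative only and will miss many forms of personal or regulated data; a proper DLP tool is needed for anything consequential:

```python
import re

# Illustrative patterns only; real redaction needs a dedicated PII/DLP tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance number shape
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),                  # rough payment-card shape
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [REDACTED:<label>] marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Running `redact` over a draft message before submitting it is a cheap local safeguard, but it does not change the upstream provider's retention policy for whatever does get sent.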

6. Watermarking and Provenance

Every audited listing carries a watermark line (visible at the bottom of role profiles) and a signed audit certificate at /audits/{id}. The certificate references the exact artefactHash that was reviewed. If a listing is updated after audit, a new audit must be completed before the new version can carry the badge.

Use the audit certificate to confirm the listing you are using is the one that was reviewed. A high trust score is a signal of audit rigour, not a guarantee of correctness.
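Confirming the listing matches the audited artefact amounts to hashing the artefact you have and comparing it with the artefactHash in the certificate at /audits/{id}. A sketch, assuming SHA-256 hashing, a JSON certificate body, and the trust-agent.ai host - all assumptions; consult the API reference for the actual scheme:

```python
import hashlib
import json
from urllib.request import urlopen

def artefact_hash(artefact_bytes: bytes) -> str:
    """Hex digest of the artefact; SHA-256 is an assumption here."""
    return hashlib.sha256(artefact_bytes).hexdigest()

def matches_certificate(artefact_bytes: bytes, certificate: dict) -> bool:
    """Compare the locally computed hash with the certificate's artefactHash field."""
    return artefact_hash(artefact_bytes) == certificate.get("artefactHash")

def fetch_certificate(audit_id: str) -> dict:
    # The /audits/{id} path comes from the disclaimer text; the host and
    # JSON response format are assumptions.
    with urlopen(f"https://trust-agent.ai/audits/{audit_id}") as resp:
        return json.load(resp)
```

If `matches_certificate` returns False, the listing has changed since the audit and the certificate no longer vouches for what you are running.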

7. Third-Party Models

If you bring your own LLM key or connect a local model (Ollama, LM Studio, Jan, llamafile), the Trust Agent audit applies only to the listing’s prompt and behaviour spec, not to the third-party model’s output. The third-party model’s safety posture is the responsibility of its operator (you, in the case of self-hosted models; the cloud provider in the case of BYO-key).
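The scope split is easiest to see in code: the audited prompt is a fixed input you control, while the reply comes from a model the audit never examined. A sketch against Ollama's default local endpoint - the model name and the way the prompt and user message are joined are illustrative, not a Trust Agent convention:

```python
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(audited_prompt: str, user_message: str, model: str = "llama3") -> dict:
    """Combine the listing's audited prompt with the user's message.

    Only audited_prompt is covered by the Trust Agent audit; the model's
    reply is the operator's responsibility (yours, for a self-hosted model).
    """
    return {
        "model": model,
        "prompt": f"{audited_prompt}\n\nUser: {user_message}",
        "stream": False,
    }

def run_locally(payload: dict) -> str:
    """Send the payload to the local model and return its (unaudited) reply."""
    req = Request(OLLAMA_URL, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["response"]
```

Everything returned by `run_locally` sits outside the audit boundary, which is why the safety posture of a self-hosted model rests with you.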

8. No Liability for AI Output

To the maximum extent permitted by law, AgentCore LTD is not liable for any loss or damage arising from your reliance on AI-generated content. This applies whether the loss is direct, indirect, consequential, financial, reputational, medical, or otherwise. Your remedies under this disclaimer are limited as set out in our Terms of Service.

9. Reporting Harmful Output

If you encounter output that is illegal, unsafe, or in clear breach of our Acceptable Use Policy, report it to info@trust-agent.ai with the listing slug and a verbatim copy of the output. We will investigate and may revoke the listing’s audit certificate.

10. Contact Us

AgentCore LTD, 20 Wenlock Road, London, England, N1 7GU. Email: info@trust-agent.ai.