Build Trust for AI Agents
Design and Build Trust Registries for AI Agents
Trust Registries play a critical role in establishing trust for AI Agents. Acting as authoritative databases, they can verify the AI Agent is accredited and authorised to carry out certain actions, ensuring transparency and accountability.
Here's how they support the AI ecosystem:
Accreditation of AI Agents: Trust Registries maintain verified records of AI agents, ensuring that they are accredited by trusted organisations or ecosystems. Through these accreditations, Trust Registries validate that AI agents are legitimate and define the scope of permissions under which each AI agent can operate.
Transparency: Trust Registries enable users and organisations to query and verify the origins and accreditations of AI systems they engage with. By providing a publicly accessible record of trusted AI agents, Trust Registries empower stakeholders to assess the credibility and history of an AI system before utilising it. This enhances confidence in the system, especially when the AI's decisions impact sensitive areas like personal data or legal outcomes.
Governance: Trust Registries also serve as a governance tool, ensuring that AI developers and platforms are held accountable for their actions. By maintaining a registry of accredited AI systems, these registries can track the ongoing compliance of AI agents, making it easier to enforce ethical standards and regulatory requirements. In the event of a failure or harm caused by an AI agent, Trust Registries offer a clear point of reference for auditing and resolving accountability issues.
cheqd has developed a robust Trust Registry solution, enabling users to establish hierarchical chains of trust, with each registry entry being DID-resolvable for enhanced transparency and security. cheqd supports various Trust Registry Data Models, leveraging its versatile DID and DID-Linked Resource architecture.
cheqd's Trust Registry model is predicated on the notion of a trust hierarchy, which is conceptually very similar to traditional Public Key Infrastructure (PKI). Specifically, the model relies on a Root of Trust from which trusted relationships can be established.
In our model for AI Agents, each organisation in the trust hierarchy is able to issue Verifiable Accreditations to other entities, conveying a set of permissions or scopes that determine what the recipient entity is permitted to do.
The following diagram shows an example of how an AI Agent Creator can accredit two AI Agents lower in the hierarchy:
Through this type of relationship, an AI Agent can prove that it is accredited by an AI Agent Creator by presenting the Verifiable Accreditation, which is stored on the cheqd blockchain.
Similarly, an AI Agent Creator can prove that it is itself trustworthy, demonstrating that it is a legitimate actor rather than a fraudulent one. In the diagram below, a Governance Authority (such as an accreditation body for AI Agent Creators) accredits AI Agent Creators directly.
Therefore, relying parties can query the accreditations of AI Agents all the way back to a Root of Trust.
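The chain-walking a relying party performs can be sketched in Python. This is a minimal illustration only: the in-memory registry dictionary, the DIDs, and the function names are hypothetical stand-ins for resolving DID-Linked Resources on cheqd.

```python
# Illustrative sketch: following issuer links from an AI Agent's
# accreditation back to a Root of Trust. The REGISTRY dict stands in
# for DID-resolvable accreditations stored on cheqd; all DIDs here
# are made-up placeholders.

REGISTRY = {
    # subject DID -> issuer DID of its Verifiable Accreditation/Attestation
    "did:cheqd:example:agent-1": "did:cheqd:example:creator-a",
    "did:cheqd:example:creator-a": "did:cheqd:example:gov-authority",
}

ROOTS_OF_TRUST = {"did:cheqd:example:gov-authority"}


def trace_to_root(subject_did: str, max_depth: int = 10) -> list[str]:
    """Follow issuer links upward and return the chain of DIDs ending
    at a Root of Trust; raise if the chain is broken or too deep."""
    chain = [subject_did]
    current = subject_did
    for _ in range(max_depth):
        if current in ROOTS_OF_TRUST:
            return chain
        issuer = REGISTRY.get(current)
        if issuer is None:
            raise ValueError(f"No accreditation found for {current}")
        chain.append(issuer)
        current = issuer
    raise ValueError("Trust chain exceeds maximum depth")


print(trace_to_root("did:cheqd:example:agent-1"))
```

In a real deployment, each lookup step would resolve the subject's DID and fetch the Verifiable Accreditation as a DID-Linked Resource, verifying its signature before following the issuer link.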
Verifiable Accreditations are issued from one entity to another. Each Accreditation must include the DID of the issuer and the DID of the subject, as well as a set of claims or permissions.
An example of a Verifiable Accreditation can be found below:
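For illustration, a simplified accreditation is sketched here. The DIDs, schema reference, and governance-framework fields are placeholders, not cheqd's exact credential format:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "VerifiableAccreditation"],
  "issuer": "did:cheqd:mainnet:<governance-authority-did>",
  "credentialSubject": {
    "id": "did:cheqd:mainnet:<ai-agent-creator-did>",
    "accreditedFor": [
      {
        "schemaId": "<DID-Linked Resource for the AI Agent Authorization schema>",
        "types": ["VerifiableCredential", "VerifiableAccreditation"]
      }
    ]
  },
  "termsOfUse": {
    "type": "TrustFrameworkPolicy",
    "trustFramework": "<name of the Governance Framework>",
    "trustFrameworkId": "<URL of the Governance Framework document>"
  }
}
```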
This accreditation grants the subject permission to issue accreditations of a particular schema, AI Agent Authorization. It is a Root Accreditation which references a Governance Framework, under which AI Agent Creators may be accredited or authorised.
Following this accreditation, the AI Agent Creator can issue the AI Agent a Verifiable Attestation conforming to the schema, for example:
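As a hypothetical sketch, such an attestation might look like the following; the DIDs and the metadata fields describing the agent are illustrative placeholders:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "VerifiableAttestation"],
  "issuer": "did:cheqd:mainnet:<ai-agent-creator-did>",
  "credentialSubject": {
    "id": "did:cheqd:mainnet:<ai-agent-did>",
    "name": "<agent name>",
    "model": "<underlying model identifier>",
    "version": "<agent version>"
  },
  "termsOfUse": {
    "type": "TrustFrameworkPolicy",
    "trustFramework": "<name of the Governance Framework>",
    "trustFrameworkId": "<URL of the Governance Framework document>"
  }
}
```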
The Credential above lists some of the core metadata that defines the AI Agent. The AI Agent may be prompted to return this credential, and a relying party can then verify that the agent is legitimate and comes from an accredited Creator.