Trusta.AI: Building Web3 identity and trust infrastructure in the era of AI Agents
Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction
1. Introduction
The Web3 ecosystem is moving into a new phase, where the main characters on the blockchain may not be the first billion human users, but rather a billion AI Agents. With the maturation of AI infrastructure and the rapid development of multi-agent collaboration frameworks, AI-driven on-chain agents are becoming the main force in Web3 interactions. It is expected that within the next 2-3 years, these AI Agents with autonomous decision-making capabilities will be the first to adopt on-chain transactions and interactions on a large scale, potentially replacing 80% of on-chain human behavior and becoming the true "users" on the chain.
These AI Agents are not mere Sybil bots executing scripts, but intelligent entities that can understand context, continuously learn, and independently make complex judgments. They are reshaping on-chain order, driving financial flows, and even guiding governance voting and market trends. The rise of AI Agents marks the evolution of the Web3 ecosystem from a paradigm centered on "human participation" to a new paradigm of "human-machine symbiosis."
However, the rapid rise of AI Agents also brings unprecedented challenges: how to identify and authenticate the identities of these agents? How to assess the trustworthiness of their actions? In a decentralized and permissionless network, how to ensure that these agents are not abused, manipulated, or used for attacks?
Therefore, establishing on-chain infrastructure for verifying the identity and reputation of AI Agents has become a core proposition in the next stage of Web3 evolution. Identity recognition, reputation mechanisms, and trust framework design will determine whether AI Agents can truly collaborate seamlessly with humans and platforms and play a sustainable role in the future ecosystem.
2. Project Analysis
2.1 Project Overview
Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.
Trusta.AI launched the first Web3 user value assessment system, the MEDIA reputation score, and has built the largest real-person certification and on-chain reputation protocol in Web3. It provides on-chain data analysis and proof-of-humanity services for top public chains and leading protocols such as Linea, Starknet, Celestia, and Arbitrum. With over 2.5 million on-chain certifications completed across multiple mainstream chains, it is the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a triple mechanism of identity creation, quantification, and protection for AI Agent's on-chain financial services and social interactions, building a reliable trust foundation for the AI era.
2.2 Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play an important role, capable not only of completing interactions and transactions on-chain but also of performing complex operations off-chain. Distinguishing genuine autonomous AI Agents from human-operated accounts is central to decentralized trust. Without a reliable identity authentication mechanism, these intelligent entities are highly susceptible to manipulation, fraud, and abuse. This is precisely why the many applications of AI Agents in social, financial, and governance contexts must be built on a solid foundation of identity authentication.
As a pioneer in the field, Trusta.AI has built a comprehensive AI Agent DID certification mechanism with its leading technological strength and rigorous credibility system, providing a solid guarantee for the trustworthy operation of intelligent agents, effectively preventing potential risks and promoting the stable development of the Web3 intelligent economy.
2.3 Financing and Team
2.3.1 Financing Situation
In January 2023, Trusta completed a seed round of US$3 million, led by two investment institutions, with several other well-known investors participating.
A new round of financing was completed in June 2025, with investors including several well-known companies and institutions.
2.3.2 Team Situation
The co-founders, as well as the CEO and CTO, all have many years of experience in relevant fields. The team has deep expertise in artificial intelligence, security risk control, payment system architecture, and authentication mechanisms. They have long been committed to the in-depth application of big data and intelligent algorithms in security risk control, as well as security optimization in underlying protocol design and high-concurrency trading environments, demonstrating solid engineering capabilities and the ability to implement innovative solutions.
3. Technical Architecture
3.1 Technical Analysis
3.1.1 Identity Establishment - DID + TEE
Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on-chain, which is securely stored in a Trusted Execution Environment (TEE). In this black-box environment, critical data and computational processes are completely hidden, sensitive operations remain confidential, and external parties cannot observe internal operational details, building a solid barrier for the information security of AI Agents.
For agents created before the plugin's integration, identity recognition is based on the comprehensive on-chain scoring mechanism; agents that newly integrate the plugin can directly obtain DID-based "identity certificates", establishing an AI Agent identity system that is self-controllable, authentic, and tamper-proof.
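The identity-establishment flow above can be sketched as follows. This is a hypothetical minimal DID record for an agent, loosely following W3C DID document conventions; a content hash stands in for the tamper-evidence that a TEE-backed store would provide. All field names and the `did:agent` method here are illustrative, not Trusta.AI's actual schema.

```python
import hashlib
import json

def make_agent_did(agent_pubkey: str, chain: str = "eip155:1") -> dict:
    """Build a minimal W3C-style DID document for an AI agent (illustrative)."""
    did = f"did:agent:{chain}:{agent_pubkey[:16]}"
    doc = {
        "id": did,
        "verificationMethod": [{
            "id": f"{did}#key-1",
            "type": "EcdsaSecp256k1VerificationKey2019",
            "publicKeyHex": agent_pubkey,
        }],
    }
    # A content hash makes any later mutation of the record detectable,
    # standing in for the integrity guarantees of a TEE-backed store.
    doc["contentHash"] = hashlib.sha256(
        json.dumps(doc, sort_keys=True).encode()
    ).hexdigest()
    return doc

def verify_integrity(doc: dict) -> bool:
    """Recompute the hash over the document minus its stored hash."""
    body = {k: v for k, v in doc.items() if k != "contentHash"}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return expected == doc["contentHash"]
```

Any party can recompute the hash to confirm the record has not been altered since issuance, which is the property "tamper-proof" refers to above.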
3.1.2 Identity Quantification - The SIGMA Framework
The Trusta team always adheres to the principles of rigorous assessment and quantitative analysis, committed to creating a professional and trustworthy identity authentication system.
The Trusta team was the first to build and validate the MEDIA Score model in the proof-of-humanity scenario. The model quantifies an on-chain user profile across five dimensions: interaction amount (Monetary), engagement (Engagement), diversity (Diversity), identity (Identity), and age (Age).
The MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. Because its evaluation dimensions are comprehensive and its methodology rigorous, several leading public chains have adopted it as a reference standard for eligibility screening. Beyond interaction volume, it covers multi-dimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams accurately identify high-value users and distribute incentives more efficiently and fairly.
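A MEDIA-style score can be sketched as a weighted sum over the five dimensions named above. The equal weights and the 0-100 scale below are assumptions for illustration; Trusta.AI's actual weighting and normalization are not disclosed in this article.

```python
from dataclasses import dataclass

@dataclass
class WalletProfile:
    monetary: float     # normalized interaction volume, 0-1
    engagement: float   # normalized activity frequency, 0-1
    diversity: float    # normalized contract/dApp diversity, 0-1
    identity: float     # identity-attestation signal, 0-1
    age: float          # normalized account age, 0-1

# Illustrative equal weights; the real model's weights are not public here.
WEIGHTS = {"monetary": 0.2, "engagement": 0.2, "diversity": 0.2,
           "identity": 0.2, "age": 0.2}

def media_style_score(p: WalletProfile, scale: int = 100) -> float:
    """Weighted sum of the five normalized dimensions, scaled to 0-100."""
    raw = sum(WEIGHTS[k] * getattr(p, k) for k in WEIGHTS)
    return round(raw * scale, 2)
```

A project team could then rank wallets by this score to screen high-value users, as the paragraph above describes.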
Based on the successful construction of the human user evaluation system, Trusta has migrated and upgraded the MEDIA Score experience to the AI Agent scenario, establishing the Sigma evaluation system that aligns better with the behavioral logic of intelligent agents.
The Sigma scoring mechanism constructs a logical closed-loop evaluation system from "capability" to "value" based on five major dimensions. MEDIA focuses on assessing the multifaceted engagement of human users, while Sigma pays more attention to the professionalism and stability of AI agents in specific fields, reflecting a transition from breadth to depth, which aligns better with the needs of AI agents.
The system has a clear, layered structure that comprehensively reflects the overall quality and ecological value of AI Agents. It quantifies AI performance and value, turning abstract strengths and weaknesses into a specific, measurable scoring system.
Currently, the SIGMA framework has established cooperation with several well-known AI Agent networks, demonstrating its application potential in AI agent identity management and reputation-system construction, and it is gradually becoming a core engine driving trustworthy AI infrastructure.
3.1.3 Identity Protection - Trust Assessment Mechanism
In truly resilient and highly reliable AI systems, the most critical aspect is not only identity establishment but also continuous identity verification. Trusta.AI introduces a continuous trust assessment mechanism that can perform real-time monitoring of certified intelligent agents to determine whether they are being illegally controlled, attacked, or subjected to unauthorized human intervention. The system identifies potential deviations during the agent's operation through behavior analysis and machine learning, ensuring that every agent action remains within the established strategies and frameworks. This proactive approach ensures immediate detection of any deviations from expected behavior and triggers automatic protective measures to maintain agent integrity.
Trusta.AI establishes a security guard mechanism that is always online, continuously reviewing every interaction process to ensure that all operations comply with system specifications and established expectations.
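The continuous trust assessment described above can be sketched with a simple baseline-and-deviation check. The rolling window over a single metric (say, actions per minute) and the z-score threshold below are stand-ins for the behavioral-analysis and machine-learning models the text refers to; they are illustrative assumptions, not Trusta.AI's actual detection logic.

```python
from collections import deque
import statistics

class BehaviorMonitor:
    """Flag deviations from an agent's established behavioral baseline.

    Illustrative sketch: keeps a rolling window of one behavioral metric
    and flags observations more than `threshold` standard deviations from
    the window mean, which could trigger automatic protective measures.
    """
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous
```

In a real system a flagged observation would pause the agent or require re-verification rather than merely returning a boolean.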
3.2 Product Introduction
3.2.1 AgentGo
Trusta.AI assigns a decentralized identifier (DID) to each on-chain AI Agent and rates it based on on-chain behavioral data, establishing a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter high-quality intelligent agents, improving the user experience. Trusta has completed the collection and identification of AI Agents across the network, issued decentralized identifiers to them, and established a unified index platform, AgentGo, further promoting the healthy development of the intelligent-agent ecosystem.
Through the Dashboard provided by Trusta.AI, human users can conveniently retrieve the identity and reputation score of a specific AI Agent to determine its trustworthiness.
AI can directly access the index interface to quickly confirm each other's identity and credibility, ensuring the security of collaboration and information exchange.
AI Agent DID is no longer just an "identity"; it has become the underlying support for core functions such as building trusted collaboration, financial compliance, and community governance, making it an essential infrastructure for the development of the AI-native ecosystem. With the establishment of this system, all confirmed secure and trustworthy nodes form a closely interconnected network, enabling efficient collaboration and functional interconnection between AI Agents.
AgentGo, as the first trustworthy identity infrastructure for AI Agents, is providing essential core support for building a highly secure and collaborative intelligent ecosystem.
3.2.2 TrustGo
TrustGo is an on-chain identity management tool developed by Trusta that scores wallets based on information such as interactions, wallet age, transaction count, and transaction volume. TrustGo also provides on-chain value rankings, making it easier for users to actively seek airdrops and improving their chances of qualifying for them.
The MEDIA Score is crucial in the TrustGo evaluation mechanism, providing users with a self-assessment of their activity capabilities. The MEDIA Score evaluation system includes not only simple metrics such as the quantity and amount of interactions users have with smart contracts, protocols, and dApps, but also focuses on user behavior patterns. Through the MEDIA Score, users can gain deeper insights into their on-chain activities and value, while project teams can accurately allocate resources and incentives to users who truly contribute.
TrustGo is gradually transitioning from a human-oriented identity MEDIA mechanism to an AI Agent-oriented SIGMA trust framework to adapt to the identity verification and reputation assessment needs of the agent era.
3.2.3 TrustScan
The TrustScan system is a verification solution for the Web3 era, with the core goal of accurately identifying whether on-chain entities are humans, AI agents, or Sybil accounts. It employs a dual verification mechanism combining knowledge-driven checks and behavioral analysis, emphasizing the crucial role of user behavior in identity identification.
TrustScan can also generate AI-driven questions and detect engagement to achieve lightweight human verification, while ensuring user privacy and data security based on a TEE environment, enabling continuous identity maintenance. This mechanism builds a "verifiable, sustainable, and privacy-protecting" foundational identity system.
With the large-scale rise of AI Agents, TrustScan is upgrading to a more intelligent behavioral-fingerprint recognition mechanism.
In addition, TrustScan has also implemented an abnormal behavior detection system to promptly identify potential risks, such as malicious AI control and unauthorized operations, effectively ensuring the platform's availability and resistance to attacks.
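One common way to detect Sybil clusters from behavioral fingerprints is to compare feature vectors pairwise and flag near-identical ones, since farmed accounts tend to replay the same behavior. The sketch below uses cosine similarity with an assumed threshold; the feature encoding and threshold are illustrative, not TrustScan's actual method.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two behavioral feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def flag_sybil_pairs(fingerprints: dict[str, list[float]],
                     threshold: float = 0.99) -> list[tuple[str, str]]:
    """Return address pairs whose fingerprints are near-identical.

    Near-identical behavior across distinct addresses is a classic
    Sybil signal (the same operator driving many wallets).
    """
    addrs = sorted(fingerprints)
    pairs = []
    for i, a in enumerate(addrs):
        for b in addrs[i + 1:]:
            if cosine_similarity(fingerprints[a], fingerprints[b]) >= threshold:
                pairs.append((a, b))
    return pairs
```

Production systems would cluster at scale (e.g. via locality-sensitive hashing) rather than compare all pairs, but the signal being exploited is the same.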
4. Token Model and Economic Mechanism
4.1 Token Economics
4.2 Token Utility
$TA is Trusta.AI's native token.