Most services today provide no way to objectively verify "what actually happened." Users cannot confirm whether AI decisions, data stored in the cloud, or code that service providers claim to have executed actually worked as promised. They simply have to trust the service provider. This may not be a concern for low-impact applications, but it becomes crucial when economic, privacy, or other trust-related stakes are high.
Blockchain offers verifiability, but constraints spanning software, hardware, and consensus prevent it from processing complex computations. EigenCloud addresses this by combining cryptographic verification in hardware-isolated environments (TEEs) with collateral-based economic security (restaking), presenting a "third path" that performs general-purpose computation off-chain while keeping results verifiable.
EigenCloud consists of four components that support shared security, distributed data storage, isolated execution environments, and deterministic AI inference. Their combination provides verifiable answers about the integrity of computational execution processes and results.
EigenCloud prioritizes developer accessibility. By natively supporting familiar Web2 development patterns (Docker containers, GPU computation, external API calls) and providing CLI tools like DevKit for quick application porting, EigenCloud enables software developers to add verifiability and blockchain integration with little to no smart contract engineering required.
Verifiability is no longer optional but essential. EigenCloud is building infrastructure for the age of AI agents, including inter-agent result verification, A2A payments, and on-chain identity management, while simultaneously creating real use cases across various domains such as prediction markets, reputation systems, cross-chain security, and institutional finance.
Source: Wiktionary
Eigen. A Middle Dutch word meaning "inherent," "original," or "truly one's own."
Sharing roots with the English word "own," it refers to properties that exist wholly unto themselves, independent of external factors.
In mathematics, this word carries special weight. Eigenvalues and eigenvectors are core concepts in linear algebra that reveal the most essential characteristics of complex systems. Matrices represent transformations that stretch, shrink, and rotate space, and most vectors change direction when subjected to these transformations. But eigenvectors are different. They maintain their direction through transformation. Only their magnitude changes; their essential directionality remains intact.
Eigenvalues and eigenvectors reveal what a system fundamentally does. Even the most complex transformation can ultimately be decomposed along the directions of its eigenvectors, and the simple essence hidden within complexity, the true identity of the system, is revealed through eigenvalues and eigenvectors.
The name EigenCloud embodies precisely this philosophy.
What does it take to say something is "truly one's own" in the digital world? For my data stored in the cloud, decisions made by AI on my behalf, or code that a service provider claims to have executed to truly be "mine," merely claiming ownership is not enough. One must be able to verify that it exists as promised and operated as promised.
Without verifiability, ownership is an illusion. If my data is "in" the cloud but I cannot check whether it has been tampered with, can I really call it my data? If AI made a decision on my behalf but I cannot verify that the decision followed the rules I set, can I really call it my decision? If a service provider claims to have executed a promised algorithm but cannot prove that algorithm actually ran, what meaning does that promise hold?
Just as eigenvalues in mathematics reveal a system's essence in a way that cannot be deceived, verifiability must reveal a digital system's essence in a way that cannot be deceived. Only then can something truly become "Eigen," genuinely one's own.
Yet in reality, this "inherent" verifiability is mostly absent. We cannot see the essence of systems; we simply believe the promises written on their surface.
Source: Guardians
In November 2023, a class action lawsuit was filed against UnitedHealth, the largest health insurer in the United States. The core issue was that the company had allegedly used an AI algorithm called "nH Predict" to improperly deny insurance claims for elderly patients. The algorithm analyzed patients' diagnoses, ages, and living situations to "predict" care duration, and insurance payments were cut off at the predicted time regardless of the attending physician's opinion.
What was more shocking was that patients could not find out why their insurance claims were denied. When patients whose coverage was terminated asked for reasons, the company refused to explain, citing confidentiality. The plaintiffs questioned the algorithm's reliability, noting that 90% of appeals were overturned, and added that the company had exploited the fact that only 0.2% of all policyholders filed appeals. The company countered that "nH Predict is only a guide, not a decision."
Whether UnitedHealth's model actually had a high error rate is unknowable. The same goes for how much the model's predictions influenced the company's decisions. This is because the model's decision-making process is entirely unverifiable. Neither patients, nor doctors, nor even regulators have any way to confirm what criteria the algorithm actually used to make its judgments. Consequently, disputes boil down to "the company's claims vs. the patient's claims," relying on legal battles rather than technical evidence.
This is not an issue that can be dismissed as a mere isolated incident. As the AI era accelerates, such "unverifiable judgments" are permeating every corner of our lives.
We store data through cloud services, conduct important transactions on online platforms, and pay enormous sums to digital systems. AI agents autonomously execute trades, handle customer service, and generate content. Yet in all these processes, there are almost no cases where one can objectively prove "what actually happened." Whether the AI service I used really runs on the promised AI model, whether the server is secretly siphoning my personal information, why the AI made a particular decision: all of this ultimately comes down to the question of "do you trust what the service provider says?"
EigenCloud seeks to offer one answer to this problem. It aims to endow current systems with "Eigen," an inherent verifiability.
The goal is to build infrastructure that can prove how complex computations were performed, what inputs produced what outputs, and whether promised code was actually executed. To create a structure that reveals a system's essence in a way that cannot be deceived. Just as eigenvalues in mathematics show the true nature of a matrix, a verifiable cloud shows the true operation of digital services.
When that becomes possible, we can finally say that something in the digital world is "truly our own."
There is one intuitive method to prove "what actually happened." Have everyone re-execute the same computation.
Blockchain works on precisely this principle. All nodes in the network verify the same transaction and must reach the same result for that transaction to be finalized. Anyone can verify execution results, and no central authority can manipulate them.
However, this solution comes at a cost. Having all nodes re-execute all computations means the entire network's processing capacity is limited to the processing capacity of individual nodes. Complex AI inference, large-scale data analysis, GPU-intensive computations, and other high-load operations are impossible or astronomically expensive in blockchain environments.
In short, blockchain cannot support every type of computation. Setting aside the computational overhead of naive re-execution, blockchain has five fundamental constraints:
Software constraints: Developers cannot directly utilize the rich open-source libraries accumulated off-chain. Software must be reimplemented for each virtual machine: EVM, WASM, zkVM, and so on. This means that instead of the more than 20 million software developers worldwide, only about 25,000 crypto developers can build blockchain applications.
Hardware constraints: Blockchain performs computations on hardware with specified specifications, and developers cannot arbitrarily select specialized resources like GPUs, high-performance CPUs, TEEs, or geographically distributed caches for computational convenience.
Interface constraints: Blockchain can only trust data already on-chain or data provided through separately secured oracles. Maintaining strong security guarantees while directly accessing external APIs or real-world state remains an unsolved challenge.
Consensus constraints: Blockchain has predetermined consensus protocols. Developers cannot customize how validator nodes distribute work, communicate, or reach consensus. Implementing new features (data availability, fast finality, privacy, AI inference protocols, etc.) requires creating an entirely new blockchain.
Objectivity constraints: Blockchain can only handle contracts with purely objective conditions. Even zero-knowledge proofs are limited to proving objective conditions. However, most real-world contracts depend on conditions that require human interpretation. Questions like "Is the service quality satisfactory?" or "Does this content violate policy?" cannot be adjudicated on-chain.
Of course, blockchain has continuously evolved to overcome these limitations. Bitcoin, the first verifiable currency, made its 21 million supply cap provable, and Ethereum introduced Turing-complete smart contracts to enable verifiable finance. Purpose-specific blockchains and high-performance general-purpose blockchains followed, gradually expanding the scope of verifiability.
Yet despite these advances, the fundamental constraints above remain. Fragmentation makes moving assets and data between chains complex, and blockchains cannot match the low latency and rich programmability of off-chain computing environments. With current technology, then, the conclusion is that complex computations must be performed outside the blockchain, off-chain, with only the results brought back. The problem is that the moment you go off-chain, you lose the verifiability blockchain provided and return to a situation where "you have to trust the service provider."
Developers face a dilemma. Do they choose blockchain, which is verifiable but severely limited in what it can do, or do they choose traditional cloud, which can do anything but is unverifiable?
Source: EigenCloud
EigenCloud offers a perspective that reframes the question itself in response to this dilemma.
Previous discussions focused on "which blockchain is best." Comparing the performance, ecosystem, and network effects of specific chains and predicting winners. But EigenCloud looks elsewhere. Regardless of which chain wins or which application succeeds, there is something commonly needed for them to operate verifiably. That is the verification infrastructure itself.
EigenCloud's role is similar to the change AWS brought to the internet. At that time, companies had to purchase their own servers and lease data centers to run websites, but AWS changed this. By lifting the heavy burden of infrastructure construction, companies could focus solely on application development.
The picture EigenCloud paints is similar. When developers want to create verifiable services, instead of designing verification mechanisms from scratch, they can leverage EigenCloud's infrastructure. Complex computations are performed off-chain, but the fact that results were derived honestly can be proven cryptographically and economically. If existing blockchain could only verify deterministic and objective conditions, EigenCloud extends the scope of verification to "everything two reasonable parties can agree on." Just as cloud made the economy programmable and created over $10 trillion in value, EigenCloud seeks to open the next chapter by making that economy verifiable.
There are many areas where such a verifiable cloud would be useful. It can be applied anywhere transparency creates value: prediction markets, supply chain tracking, reputation systems, and more.
However, AI agents occupy a special position among these. If verifiability is "nice to have" in other domains, for AI agents it is the very condition for existence.
As of 2025, AI agents are no longer science fiction. Autonomous trading bots analyze markets, customer service agents handle inquiries, and content agents write text. But for them to truly take on high-risk, high-value tasks, there is a wall they must overcome.
Imagine an AI agent trading stocks with your funds.
Is the agent really behaving according to the strategy I set?
Did the service provider manipulate the agent's judgment behind the scenes?
Is the AI model the agent uses really the promised model, or has it been replaced with a cheaper, less accurate model to cut costs?
In most current AI agent services, the answers to these questions are unknowable, and users have no choice but to trust the service provider. But trust alone is insufficient in high-risk domains like finance. As seen in the UnitedHealth case from the introduction, an environment demanding unconditional trust in contexts where interests conflict is unlikely to be a sustainable solution.
EigenCloud presents one approach to this problem. It executes complex computations off-chain while making the process and results verifiable through cryptographic and economic mechanisms. The following sections will examine in greater detail exactly how EigenCloud built its verifiable cloud.
Before detailed explanation, let us first survey EigenCloud's overall structure. EigenCloud broadly consists of four components.
EigenLayer is the foundation layer of this structure, recycling assets staked on Ethereum to provide economic security to various services. EigenDA, EigenCompute, and EigenAI are three AVSes (Actively Validated Services) built on this shared security, each responsible for core verifiable cloud functions: data storage, computation execution, and AI inference respectively.
This section will first examine how EigenLayer, the foundation layer, provides shared security, and the next section will cover the specific operations of the three AVSes.
Currently, approximately 35 million ETH is staked on Ethereum to protect the network. Validators deposit this asset as collateral, accepting the risk of being slashed (having funds confiscated) if they behave dishonestly. This economic incentive sustains Ethereum's security.
The problem was that this massive security asset was used only for Ethereum consensus. Developers wanting to build new decentralized services had to recruit their own validator network from scratch and issue their own token to establish economic security. This is an enormously costly and time-consuming task, and security is inevitably weak in the early stages.
EigenLayer introduced the concept of restaking as a solution to this problem. The idea is to reuse ETH already staked on Ethereum for the security of other services as well.
Consider collateral deposited at a bank to secure a mortgage. Traditionally, that collateral backs only that one loan. Restaking is like using the same collateral to simultaneously back multiple loans: car loans, business loans, and so on. The collateral can still be confiscated if misconduct occurs in any one of them, but its utilization increases accordingly.
EigenLayer broadly consists of three types of participants:
Stakers: Participants who deposit ETH, LST (Liquid Staking Token), or EIGEN tokens into EigenLayer. They do not operate nodes directly but instead delegate their assets to trusted operators. Deposited assets back the economic security of AVSes, and stakers earn fee revenue in return. However, if the operator they delegated to commits misconduct, they share the slashing risk.
Operators: Node operators who actually perform verification work for AVSes. Based on stake delegated from stakers, they execute the work required by each AVS. Operators can choose which AVSes to participate in and decide how much stake to allocate as slashable for each AVS.
AVS (Actively Validated Services): Decentralized services built on EigenLayer's shared security. AVSes do not need to issue their own tokens or recruit validator networks from scratch; simply by registering with EigenLayer, they can attract participation from existing operators. In return, AVSes pay service fees to operators and stakers.
Source: EigenLayer
If Ethereum is the foundation, EigenLayer is like erecting a steel framework on that foundation that multiple buildings can share. Each AVS is an individual building going up on that framework, and EigenCloud is an infrastructure package that provides integrated utilities needed to construct buildings: electricity, plumbing, elevators, and so on. If individual AVSes are services solving specific problems, EigenCloud provides infrastructure that these AVSes commonly require. That includes verifiable computing environments, data persistence, dispute resolution, deterministic AI inference, and more. As of 2025, over 40 AVSes are operating on EigenLayer mainnet.
3.2.1 Slashing and Redistribution
To induce honest computation, detection alone is not enough; misconduct must carry consequences.
This is where EigenLayer's economic security comes into play. Operators performing computation on EigenCloud are participants who have deposited ETH or EIGEN tokens as collateral, and if misconduct is confirmed, their collateral is slashed.
The core of the slashing mechanism is Unique Stake Allocation. Operators can specify the percentage of slashable stake for each AVS, anywhere from 0% to 100%. The key point is that a single unit of ETH can only be slashed by a single operator set at any given time. This design mitigates "slashing cascade" risk where problems in one AVS could chain to others. For example, if there are 3 operators in an operator set each staking 100 ETH, and they allocate 10%, 10%, and 20% respectively, the unique stake totals 40 ETH. A malicious attacker wanting to manipulate this system would need to allocate at least 40.1 ETH of unique stake, and if the attack is detected, the entire amount gets slashed.
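The worked example above can be sketched in a few lines (a toy calculation using the figures from the text, not EigenLayer's actual accounting logic):

```python
# Worked example of Unique Stake Allocation, using the illustrative
# figures above: three operators, each staking 100 ETH, allocating
# 10%, 10%, and 20% of their stake as slashable for one AVS.
stakes = [100, 100, 100]          # ETH staked per operator
allocations = [0.10, 0.10, 0.20]  # fraction made slashable for this AVS

# Unique stake: the total that can be slashed by this AVS's operator set,
# and by no other operator set at the same time.
unique_stake = sum(s * a for s, a in zip(stakes, allocations))
print(unique_stake)  # 40.0 ETH

# To out-stake the honest set, an attacker must put up strictly more than
# this unique stake, all of which is forfeited if the attack is detected.
minimum_attack_cost = unique_stake
```

Because each unit of ETH is slashable by only one operator set at a time, the 40 ETH computed here is fully at risk for this AVS alone, which is what contains slashing-cascade risk.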
It is also important that slashing can lead to victim compensation beyond simple punishment. Through its redistribution mechanism, EigenLayer allows slashed funds to be delivered to designated recipients instead of being burned. If a trading bot operator incurred losses due to misconduct, they can receive compensation from the slashed assets. This feature is entirely opt-in: when redistribution is enabled, operators decide whether to accept those terms, and stakers can choose whether to delegate to operators who implement redistribution.
3.2.2 The EIGEN Token and Intersubjective Verification
The EIGEN token is another pillar of this economic enforcement. What makes EIGEN special is that it transcends the fundamental limitations of traditional slashing.
Traditional slashing can only punish "objectively determinable" rule violations. It only works in cases that are mathematically provable on-chain, like double-signing. But many real-world problems are not objective. Questions like "Is this AI model biased?" or "What is the outcome of this prediction market?" cannot be adjudicated by on-chain logic alone.
EIGEN solves this problem with intersubjective verification. Intersubjective verification refers to verification conducted under the principle that "if two reasonable observers can agree on something, it is verifiable." This is not mathematical proof, but precisely because it is not, it extends the scope of verification to areas that can be judged by social consensus. This is the concrete mechanism that makes "everything two reasonable parties can agree on" verifiable.
What makes this possible is EIGEN's dual-token model. EIGEN (ERC-20) is a standard transferable token that can be used in DeFi or traded on exchanges; holders need not participate in fork disputes. In contrast, bEIGEN is the representation obtained when staking EIGEN, a voluntary commitment to bear slashing risk and participate in protocol security. Thanks to this separation, general token holders can enjoy the token's economic value without getting caught up in fork politics, while only stakers bear verification responsibilities and their associated risks.
The fork mechanism works as follows. If a majority of EIGEN stakers make an incorrect judgment (e.g., intentionally mis-settling a prediction market), a challenger can burn a certain amount of EIGEN and create a token fork. Now two versions of EIGEN exist, and users and AVSes choose which version to recognize as "canonical." If the majority chooses the forked version, the assets of malicious stakers holding the original token lose value. When bEIGEN is unstaked, it converts to EIGEN from the socially recognized fork.
The reason this structure works is economic deterrence. If a majority of stakers act maliciously, they must accept the risk that their own assets will become worthless. Forks will rarely actually occur, but the very possibility induces honest behavior.
To understand EigenCloud's structure, one must first grasp its design principles. As examined earlier, EigenCloud is built on EigenLayer's shared security, and the central idea running through it is "decoupling." In existing blockchains, both tokens and application logic are processed on-chain, but EigenCloud separates these two. Core financial functions like escrow, transfer, and slashing remain on-chain, directly utilizing existing blockchain's verifiability, while application logic like business rules, complex computations, and external system integration runs in off-chain environments.
The question is: how can we trust what was executed off-chain? EigenCloud built the following three first-party AVSes based on EigenLayer's shared security to provide the infrastructure needed for verifiable computation.
EigenDA: A distributed data store that answers the question, "Did this data really exist?" It records the inputs and outputs of off-chain computations on a distributed network, making them impossible for any single entity to delete or manipulate.
EigenCompute: A TEE-based (Trusted Execution Environment) computing environment that answers the question, "Was this code executed without manipulation?" It runs Docker containers in a trusted execution environment and generates cryptographic proofs of execution results.
EigenAI: A deterministic AI inference layer that answers the question, "Does the same input produce the same answer?" It provides bit-exact deterministic LLM inference that guarantees identical outputs for identical prompts and models.
These do not exist independently but are connected in a single flow. Code runs in EigenCompute's TEE; if AI inference is needed, EigenAI provides deterministic results; and all inputs, outputs, and proofs are recorded in EigenDA. All of this operates on EigenLayer's shared security, and if misconduct is confirmed, EigenLayer's slashing mechanism is enforced.
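The flow above can be sketched as pseudocode. None of the function names below are real EigenCloud APIs; they are hypothetical stand-ins that only mirror the roles of the three AVSes (isolated execution, deterministic inference, persistent records):

```python
import hashlib
import json

def run_in_tee(code_hash: str, inputs: dict) -> dict:
    """Stand-in for EigenCompute: execute code and return output + a receipt.
    A real TEE receipt is a manufacturer-signed attestation; here we just
    hash the (code, inputs, output) tuple to illustrate the binding."""
    output = {"decision": "buy", "qty": 10}  # toy result
    receipt = hashlib.sha256(json.dumps(
        {"code": code_hash, "inputs": inputs, "output": output},
        sort_keys=True).encode()).hexdigest()
    return {"output": output, "attestation": receipt}

def deterministic_inference(model: str, prompt: str, seed: int) -> str:
    """Stand-in for EigenAI: same (model, prompt, seed) -> same output."""
    return hashlib.sha256(f"{model}|{prompt}|{seed}".encode()).hexdigest()[:16]

def store_record(record: dict) -> str:
    """Stand-in for EigenDA: persist the record, return a content address."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

# One pass through the pipeline: inference feeds execution, and everything
# (inputs, outputs, proof) is anchored in the data layer for later audits.
inputs = {"market": [101.2, 101.5],
          "signal": deterministic_inference("model-y", "analyze AAPL", 42)}
result = run_in_tee(code_hash="abc123", inputs=inputs)
blob_id = store_record({"inputs": inputs, **result})
```

The point of the sketch is the shape of the flow, not the primitives: each stage emits something a third party can later recompute or check, and the data layer preserves the material needed to do so.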
In the following sections, let us examine in more detail how each component of EigenCloud operates through a simple scenario.
Imagine you are running an AI trading bot. This bot analyzes market data and makes buy/sell decisions using an AI model. In the conventional approach, you would deploy the bot on AWS and simply have to trust the decisions the bot makes. There would be no way to verify why the bot sold at a particular moment or whether someone manipulated the bot's behavior behind the scenes.
How would it be different on EigenCloud?
Let us start with the most basic question. Is the code the bot is running really the code you deployed? How can you be sure the server administrator did not secretly modify the code?
EigenCompute is an AVS designed to solve this problem, performing verification based on TEE. Simply put, TEE is a hardware-based security system that encrypts specific memory regions or execution environments at the hardware level, completely isolating them from the operating system or server administrators. Code executed in TEE's isolated space and data stored there cannot be viewed or interfered with from outside. Additionally, TEE provides remote attestation that proves integrity by communicating with the hardware manufacturer's server. This attestation cryptographically guarantees that "this specific code, with this specific input, was executed in an isolated environment and produced this output."
TEE-based verification offers the advantage that "you don't have to trust the service provider," but it presupposes trust in hardware manufacturers (Intel, AMD, AWS, etc.). The validity of remote attestation depends on the assumption that the manufacturer's servers respond honestly. However, this is considered a practical security improvement because hardware manufacturers bear broader reputational risk than individual cloud service providers, and using TEEs from multiple manufacturers in parallel can mitigate single points of failure.
The actual flow works like this:
Package the bot's code as a Docker image and register it with EigenCompute; the image hash is recorded on-chain.
When an execution request comes in, a node equipped with TEE loads that image and executes it in an isolated environment.
When execution completes, the TEE generates remote attestation including hashes of code, input values, and output values along with timestamps. After remote attestation through the hardware manufacturer's server, a receipt is generated.
Anyone can verify the proof through the remote attestation receipt, confirming EigenCompute's computational integrity.
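The verification step can be illustrated with a toy receipt check. The field names below are hypothetical, and a real attestation additionally carries a manufacturer-signed quote that is omitted here; the sketch only shows the hash-comparison logic a verifier runs:

```python
import hashlib

# The image hash registered on-chain at deployment time (step 1 above).
registered_image_hash = hashlib.sha256(b"docker image bytes").hexdigest()

# A (hypothetical) receipt emitted by the TEE after execution (step 3).
receipt = {
    "image_hash": registered_image_hash,
    "input_hash": hashlib.sha256(b"market data").hexdigest(),
    "output_hash": hashlib.sha256(b"sell 10 @ 101.5").hexdigest(),
    "timestamp": 1730000000,
}

def verify_receipt(receipt: dict, expected_image_hash: str,
                   inputs: bytes, outputs: bytes) -> bool:
    """Re-derive each hash locally and compare against the receipt's claims."""
    return (receipt["image_hash"] == expected_image_hash
            and receipt["input_hash"] == hashlib.sha256(inputs).hexdigest()
            and receipt["output_hash"] == hashlib.sha256(outputs).hexdigest())

print(verify_receipt(receipt, registered_image_hash,
                     b"market data", b"sell 10 @ 101.5"))   # True
print(verify_receipt(receipt, registered_image_hash,
                     b"tampered data", b"sell 10 @ 101.5")) # False
```

Because the image hash was fixed on-chain before execution, a verifier who trusts the hardware attestation needs nothing else: any substitution of code, input, or output breaks one of the three comparisons.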
From a developer's perspective, this is important because it can lower the barrier to entry compared to existing blockchain-based computation. Modern TEEs like Intel TDX and AWS Nitro impose almost no software-level constraints, so they natively support features that modern software takes for granted: Docker containers, GPU computation, external API calls, and so on. This contrasts with zero-knowledge proofs, which require generating special circuits for computation verification and have significantly limited complexity.
EigenCompute launched its mainnet alpha in October 2025 and supports developers in quickly porting existing applications to verifiable environments through a CLI tool called DevKit. Additionally, through a task-based execution framework called Hourglass, it standardizes how to define, deploy, execute, and verify compute tasks across a distributed operator network.
The EigenCompute examined earlier allows users to verify whether there was intervention by an intermediary during code execution and whether the intended program matches the executed program. However, AI models present several additional challenges due to the complexity of inference, and EigenAI is an AVS designed to address these issues.
Prompt Modification: Developers carefully engineer prompts to obtain desired responses. If the prompt is modified in any way, the integrity of this context engineering is compromised, causing the agent to execute unintended behaviors.
Response Modification: In high-stakes agents, each action based on an LLM response can have extremely broad implications, whether economic or otherwise. In such cases, ensuring each response is tamper-proof is essential.
Model Modification: There is no verifiable guarantee that the model served by existing providers is actually the model expected or paid for. Lighter models could be used to reduce infrastructure costs, and agents requiring specific levels of reasoning capability or tool-calling functionality risk not operating as intended if model consistency is not guaranteed.
Currently, there is no way to verify whether these things are happening in most AI services, but EigenAI solves this problem through a bit-exact deterministic execution mechanism. The EigenCloud team announced that after analyzing various layers of the computing stack, from GPU types and CUDA kernels to inference engines and token generation methods, they achieved deterministic execution of LLM inference at GPU scale. Given the same prompt, model, and seed, identical output is always generated.
Let us examine in detail why deterministic AI inference is difficult and how EigenAI solves it. Generally, GPU computation is non-deterministic. During parallel processing, if the order of floating-point operations differs, rounding errors can accumulate and the final result can differ slightly. The EigenAI team performed the following stack-wide optimizations to solve this problem:
Token generation layer: Fixed seed sampling, deterministic decoding
Inference engine layer: Fixed operation order, standardized batch processing
CUDA kernel layer: Deterministic kernel selection, forced synchronization
Hardware layer: Requires identical GPU architecture
As a result of these optimizations, EigenAI guarantees bit-identical output for the same (model, prompt, seed) combination regardless of which node executes it.
Once this becomes possible, the logic of verification becomes simple. If someone claims that prompt X and model Y produced output Z, just re-execute with the same X and Y. If the result matches Z, it operated correctly; if different, that discrepancy itself is cryptographic evidence of misconduct. Without a separate complex proof system, reproducibility itself becomes the verification mechanism.
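The re-execution logic can be demonstrated with a toy deterministic "inference" step. Real LLM determinism requires the stack-wide fixes described above; the fully seeded sampler below (a stand-in, not EigenAI's mechanism) only illustrates why reproducibility alone suffices as verification:

```python
import random

VOCAB = ["buy", "sell", "hold", "wait"]

def infer(model_id: str, prompt: str, seed: int, n_tokens: int = 5) -> list:
    """Toy inference: a fully seeded sampler, so the same (model, prompt,
    seed) always yields the same token sequence on any machine."""
    rng = random.Random(f"{model_id}|{prompt}|{seed}")
    return [rng.choice(VOCAB) for _ in range(n_tokens)]

def verify(model_id: str, prompt: str, seed: int, claimed_output: list) -> bool:
    """Re-execute and compare bit-for-bit; any mismatch exposes tampering."""
    return infer(model_id, prompt, seed) == claimed_output

claimed = infer("model-y", "analyze AAPL", seed=7)
tampered = claimed[:-1] + ["FORGED"]
print(verify("model-y", "analyze AAPL", 7, claimed))   # True
print(verify("model-y", "analyze AAPL", 7, tampered))  # False
```

Note that the verifier needs no cryptographic machinery of its own: determinism turns "trust the claimed output" into "recompute and compare."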
So far, through EigenCompute and EigenAI, we have confirmed code integrity and that AI judgments are also reproducible. But one prerequisite remains for this. The input values at the time of execution must be preserved somewhere. If the bot's inputs and outputs are stored only in the service provider's database, verification itself becomes impossible if the provider deletes or modifies records.
EigenDA stores these records on a distributed network. Data is split into multiple pieces and stored on different nodes, making it impossible for any single entity to delete or manipulate. Operators sign proofs that they are holding the data, and these signatures are aggregated and recorded on Ethereum.
4.4.1 EigenDA's Technical Approach
There are several ways to solve the data availability problem. The simplest approach is replication. If you redundantly store the same data on multiple nodes, you can recover data even if some nodes fail. However, this approach has low storage efficiency. If you want 3x fault tolerance, you need 3x storage space.
Modular data availability layers like Celestia adopted a 2D Reed-Solomon technique that arranges data in a two-dimensional matrix of rows and columns, then applies erasure coding to each row and column. This approach has the advantage that light nodes can probabilistically verify data availability through random sampling alone, but full nodes must still download and store all the data, so identical data ends up stored redundantly within the committee.
EigenDA adopts a 1D Reed-Solomon technique, eliminating the duplicate storage problem. It mathematically encodes original data and divides it into multiple pieces, but only a subset of all pieces is needed to perfectly recover the original. For example, any 50 out of 100 pieces can restore the original data. In this approach, each node stores different unique pieces, maintaining high recoverability without the same data being stored twice.
Each piece's integrity is cryptographically verified through KZG polynomial commitments. Each node comprising EigenDA can mathematically verify that the piece it received was correctly encoded and is a legitimate part of the original data.
According to EigenDA's official documentation, thanks to this design, total data transmission volume stays within 10 times the theoretical minimum. In contrast, the gossip protocols used by existing blockchains increase transmission volume proportionally to the number of validators and full nodes, potentially exceeding 100x.
4.4.2 The Encoding Pipeline and GPU Acceleration
Let's take a closer look at how Reed-Solomon and KZG actually work in practice.
EigenDA's encoder takes a single blob of data as input and produces 8,192 frames as output. Each frame consists of a chunk paired with a proof. The chunk is the actual data fragment that validators store, while the proof is cryptographic evidence confirming that the chunk is a legitimate part of the original data. This allows validators to verify that the fragment they hold is genuine without downloading the entire dataset.
The encoding process is divided into two stages. First, Reed-Solomon encoding expands the original data by a factor of eight and splits it into 8,192 chunks. This step uses the Number Theoretic Transform (NTT), a finite-field variant of the Fast Fourier Transform (FFT). Next, KZG multiproof generation creates a cryptographic proof for each chunk.
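A back-of-the-envelope sketch, using only the figures quoted in this section (8x expansion, 8,192 frames, 16MB maximum blob), suggests the per-chunk sizes involved; the actual EigenDA constants may differ:

```python
# Rough pipeline dimensions derived from the figures in the text.
MAX_BLOB = 16 * 1024 * 1024   # 16 MiB maximum blob size
EXPANSION = 8                 # Reed-Solomon expansion factor
NUM_FRAMES = 8192             # chunks, each paired with a KZG proof

encoded_size = MAX_BLOB * EXPANSION       # 128 MiB after encoding
chunk_size = encoded_size // NUM_FRAMES   # bytes a validator stores per blob
min_chunks = NUM_FRAMES // EXPANSION      # chunks needed to reconstruct

print(chunk_size)   # 16384 bytes: each chunk is 16 KiB
print(min_chunks)   # 1024: 1/8 of the chunks recover the full blob
```

Under these assumed parameters, each validator holds a 16 KiB chunk per maximum-size blob, and any eighth of the 8,192 chunks suffices to reconstruct the original 16 MiB.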
The challenge is that this process is computationally intensive. Processing a single 16MB blob requires millions of complex mathematical operations, and the elliptic curve arithmetic used in KZG proof generation consumes most of the total encoding time.
EigenDA addressed this bottleneck through GPU acceleration. Elliptic curve arithmetic, like the pixel computations GPUs were built for, parallelizes extremely well. EigenDA leveraged Ingonyama's ICICLE library to offload the heaviest computations to the GPU.
However, not all operations were moved to the GPU. The transform operations in the Reed-Solomon path consist of 8,192 small independent computations, each handling only 512 elements. For these, the overhead of launching GPU kernels actually creates inefficiency, as CPUs handle such workloads more efficiently. In other words, only the large-scale operations where GPUs truly excel were selectively accelerated.
An additional optimization dynamically adjusts the computational workload based on actual data size. Previously, even a 1MB blob was processed using a matrix sized for the maximum 16MB, resulting in unnecessary calculations. By computing only the portions filled with actual data and padding the rest afterward, EigenDA achieved an additional 8x speedup for typical blob sizes.
As a result, encoding a 16MB blob now completes in approximately 1.26 seconds, enabling sustained throughput of over 100MB/s. For comparison, CPU-only encoding takes about 8.2 seconds for just a 128KB blob, which is 56 times slower on a per-byte basis than the GPU-accelerated approach.
4.4.3 Cloud-Level Throughput
Through successive optimizations, EigenDA has reached performance in a different league from other data availability solutions. V2 (codename Blazar), which improved on the initial version and was deployed in July, achieved throughput of 100 MB/s, already at least 75 times higher than competitors.
Source: EigenCloud
Not stopping there, in December 2025, EigenDA achieved cloud-level data throughput of 1 GB per second through its proprietary database, LittDB.
To achieve 1GB/s throughput, EigenDA had to go beyond the limits of existing databases. The team initially used LevelDB, a widely deployed key-value store, but EigenDA's unusual workload led to excessive memory consumption and disappointing throughput, so the team built its own database, LittDB, to pursue further performance gains.
LittDB addresses these problems with a "zero-copy" design: once data is written to disk, it is never moved or copied until it is deleted or its retention period expires. Eliminating the redundant copying and space overhead that LevelDB incurred improved write performance by 1,500x.
This choice comes with trade-offs. LittDB does not support data modification, manual deletion, transactions, complex queries, replication, compression, or encryption. However, EigenDA's unique workload does not require these features, so by boldly removing them, it achieved performance that far exceeds general-purpose databases in the functions it needs.
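The spirit of that design can be sketched as an append-only store with TTL-based expiry. This is an illustrative toy, not LittDB's actual implementation: records are written once, read back by offset, and never rewritten:

```python
import os
import tempfile
import time

class AppendOnlyStore:
    """Toy sketch in LittDB's spirit: append-only writes, reads by
    (offset, length), removal only via TTL expiry. No updates, no
    transactions, no queries -- the features LittDB deliberately drops."""

    def __init__(self, path, ttl_seconds):
        self.ttl = ttl_seconds
        self.f = open(path, "ab+")
        self.index = {}  # key -> (offset, length, written_at)

    def put(self, key, value: bytes):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(value)        # data lands on disk once and never moves
        self.f.flush()
        self.index[key] = (offset, len(value), time.time())

    def get(self, key):
        offset, length, written = self.index[key]
        if time.time() - written > self.ttl:
            return None            # expired: the only way data disappears
        self.f.seek(offset)
        return self.f.read(length)

path = os.path.join(tempfile.mkdtemp(), "litt.bin")
store = AppendOnlyStore(path, ttl_seconds=60)
store.put("blob1", b"first record")
store.put("blob2", b"second record")
assert store.get("blob1") == b"first record"
```

Because nothing is ever rewritten in place, there is no compaction or rebalancing work competing with foreground writes, which is where an append-only design recovers its throughput.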
Let us return to the trading bot. The complete flow of this bot operating on EigenCloud is as follows.
The bot's code runs inside EigenCompute's TEE. If an AI model is needed for market analysis, EigenAI performs deterministic inference. All inputs, outputs, and proofs are recorded in EigenDA. This entire process is backed by assets staked on EigenLayer.
If questions arise about a specific trade later, you can retrieve the records from that point in time from EigenDA and re-execute with the same code and same inputs. Since EigenAI is deterministic, AI judgments are also reproduced identically. If results match, it is normal; if different, manipulation occurred. When manipulation is confirmed, slashing is triggered, and victims receive compensation through redistribution.
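The audit loop can be sketched in a few lines. Here a plain list stands in for EigenDA and a toy deterministic strategy stands in for EigenAI inference; all names are illustrative:

```python
import hashlib
import json

def record(eigenda_log, code_hash, inputs, output):
    """Append an execution record; in production this goes to EigenDA."""
    eigenda_log.append({"code": code_hash, "inputs": inputs, "output": output})

def audit(entry, executor):
    """Re-run the deterministic executor on the stored inputs and compare
    against the stored output. A mismatch implies manipulation."""
    return executor(entry["inputs"]) == entry["output"]

def strategy(inputs):
    """Toy deterministic strategy: same inputs always give the same call."""
    h = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return "BUY" if int(h, 16) % 2 == 0 else "SELL"

log = []
inputs = {"price": 67000, "signal": "bullish"}
record(log, "sha256:demo", inputs, strategy(inputs))
assert audit(log[0], strategy)                       # honest record reproduces
log[0]["output"] = "SELL" if log[0]["output"] == "BUY" else "BUY"
assert not audit(log[0], strategy)                   # tampered record is caught
```

The check only works because the executor is deterministic; this is exactly the property EigenAI's deterministic inference is meant to guarantee for AI judgments.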
In summary, EigenLayer guarantees the economic cost of misconduct through restaking-based shared security and slashing; EigenDA stores all records in a distributed manner, making evidence destruction impossible; EigenCompute guarantees that code runs in an isolated environment without manipulation; and EigenAI guarantees the same results upon re-execution through deterministic AI inference. These four elements combine to form the "verifiable cloud."
The EigenCloud ecosystem operates with two flywheels.
The first is the Shared Security Flywheel. As more ETH is restaked, stronger economic security is provided; more AVSes want to utilize this security; the rewards paid by AVSes increase; and more stakers participate in restaking. This virtuous cycle has grown the EigenLayer ecosystem.
The second is the Shared Distribution Flywheel that EigenCloud adds. As demand from apps and agents for EigenCompute/EigenAI/EigenDA increases, more AVSes want to provide services to these apps; as the AVS ecosystem becomes richer, the verifiable services available for app developers to choose from increase; more powerful apps become possible; and app demand increases again.
The two flywheels reinforce each other. The Shared Security Flywheel provides the trust foundation for AVSes, and the Shared Distribution Flywheel provides the demand foundation for AVSes.
This flywheel structure holds particularly important meaning in the agent ecosystem. In a world where agents collaborate and transact with each other, demand for verifiable infrastructure will grow exponentially.
For both flywheels to spin continuously, participant incentives must be properly aligned. In December 2025, Eigen Labs and the Eigen Foundation proposed ELIP-12, which aims to establish a new Incentives Committee. The core idea is to direct token emissions toward participants who actually create value and to channel the fees generated back to EIGEN token holders.
Specifically, a 20% fee will be charged on AVS rewards earned by stake subsidized with EIGEN incentives, and 100% of EigenCloud fees (from EigenDA, EigenCompute, and EigenAI), net of operator expenses, will flow into a fee contract. Accumulated fees will fund buybacks, reducing EIGEN's circulating supply and creating deflationary pressure. Furthermore, rewards will be concentrated on productive rather than idle stake, and only fee-paying AVSs will be eligible for incentives.
These tokenomic changes fuel both flywheels: concentrating rewards on productive stake attracts more slashable capital, recycling cloud fees encourages service usage, and buybacks support token value, forming a virtuous cycle.
So where will the demand that most powerfully accelerates these two flywheels come from?
The three EigenCloud AVSes examined earlier (EigenCompute, EigenDA, and EigenAI) were designed for verifying individual computations. But this structure holds greater possibilities. In a world where agents collaborate, transact, and delegate to one another, they can become the foundational infrastructure of the agent economy.
In the human economy, trust is formed through brands, reputation, legal recourse, and social relationships, but agents lack these mechanisms. Agents cannot see the other party's face, cannot ask about their history, and cannot go to court if a contract is breached. Verifiability is not "nice to have" for agents but rather a "precondition for transactions."
EigenCloud has a structure capable of meeting this precondition. It can verify another agent's output with TEE proofs, reproduce the judgment process with deterministic AI, leave all records in distributed storage, and impose economic sanctions for misconduct. The verifiability of individual agents naturally extends to the verifiability of inter-agent collaboration.
Now imagine that the trading bot from Section 4 has begun performing more complex tasks. As the bot gets smarter, it becomes more efficient to collaborate with other specialized agents rather than handling everything alone.
If each agent runs individually on EigenCloud, each one's actions are verifiable. But the moment agents collaborate with each other, new problems emerge as follows.
Problem 1: How can you trust the other party's results?
The trading bot received an analysis of "Bitcoin upward signal" from a data agent. How can it know whether this analysis was really calculated with the promised model or is a manipulated result?
Problem 2: How can payment for services be exchanged?
If the data agent provided analysis, it should receive payment. But agents have no credit cards. They cannot verify their identity. Existing payment systems were designed with humans in mind.
Problem 3: How do you know who is who?
Before the trading bot "hires" a data agent, it wants to check how reliable this agent has been in the past. How can the agent's history and reputation be queried?
EigenCloud is building infrastructure for these three problems.
The solution to the first problem is the propagation of verifiability.
If a data agent ran on EigenCloud, its output includes cryptographic proof. The trading bot does not just receive the analysis result but also receives proof that "this result was generated with this code, with this input, inside a TEE." Verifying the proof allows trusting the result.
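The idea of proof-carrying results can be sketched with a keyed MAC standing in for a real TEE attestation (which would use hardware-backed keys and remote attestation, not a shared secret; all names here are illustrative):

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a TEE attestation key.
TEE_KEY = b"enclave-signing-key"

def emit_result(code_hash, inputs, output):
    """The data agent packages its result with a proof binding
    (code, inputs, output) together."""
    payload = json.dumps({"code": code_hash, "inputs": inputs,
                          "output": output}, sort_keys=True).encode()
    proof = hmac.new(TEE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "proof": proof}

def verify_result(msg):
    """The trading bot checks the proof before trusting the analysis."""
    expected = hmac.new(TEE_KEY, msg["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["proof"])

msg = emit_result("sha256:model-v2", {"asset": "BTC"}, "upward signal")
assert verify_result(msg)
msg["payload"] = msg["payload"].replace(b"upward", b"downward")
assert not verify_result(msg)                    # tampering breaks the proof
```

The key point is that the proof covers the code, the inputs, and the output as one unit, so a counterparty cannot swap any of the three after the fact.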
Source: EigenCloud
A case where this structure already works is elizaOS. This open-source framework, with over 50,000 agents deployed and more than 1,300 contributors, integrated EigenCloud so that proof metadata is included in messages exchanged between agents. elizaOS has also been positioned as the first generative token network supporting cross-chain execution through EigenCloud.
Source: EigenCloud
The case of FereAI shows why this matters. The company, which builds trading agents managing real capital, could not audit its agents' decisions after the fact, so it adopted EigenAI. With deterministic inference, identical inputs guarantee identical outputs, making decision processes reproducible and verifiable.
Source: Google Cloud
In September 2025, Google Cloud announced the Agent Payments Protocol (AP2), a payment protocol for AI agents. AP2 aims to build a standardized framework for on-chain AI agent-to-agent payments using the x402 payment protocol.
So when Agent A pays Agent B, how can you know whether B actually provided the promised service? In human transactions, brand reputation, legal recourse, and social trust mitigate this problem, but agent transactions lack these mechanisms.
EigenCloud's selection as a launch partner for Google Cloud's AP2 demonstrates the need for such a system. EigenCloud aims to extend blockchain-based verifiability to AP2, providing a programmable trust layer where AI agents can verify work, coordinate cross-chain payments, and enforce economic guarantees at global scale.
Specifically, EigenCloud adds the following three capabilities to AP2:
Work Verification: Server agents execute work on EigenCompute and deliver proof to client agents
Payment Abstraction: If client agents lack funds on the requested network, bridging is handled automatically
Dispute Resolution: If proof verification fails, client agents can slash server agents
Through the combination of EigenCloud and AP2, verifiability is added to inter-agent payments, technically eliminating the risk of "paying but not receiving service."
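The flow of the three capabilities above can be sketched as a toy escrow state machine. The names, and the rule that slashed stake flows to the client, are illustrative assumptions rather than the actual AP2/EigenCloud API:

```python
class VerifiedPayment:
    """Toy escrow sketch: the client locks payment, the server delivers
    work plus a proof, and failed verification slashes the server's stake."""

    def __init__(self, amount, server_stake):
        self.amount = amount
        self.server_stake = server_stake
        self.state = "escrowed"

    def settle(self, proof_valid: bool):
        if proof_valid:
            self.state = "paid"        # work verified: release the payment
            return ("server", self.amount)
        self.state = "slashed"         # dispute: slash the server's stake
        return ("client", self.amount + self.server_stake)

p = VerifiedPayment(amount=100, server_stake=500)
assert p.settle(proof_valid=True) == ("server", 100)

q = VerifiedPayment(amount=100, server_stake=500)
assert q.settle(proof_valid=False) == ("client", 600)
```

Because payment release is conditioned on proof verification rather than on trust, the client never faces the "paid but not served" outcome, and the server risks more than the payment by cheating.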
The third problem is identity. For agents to function as economic actors, they must be identifiable as "who" they are.
ERC-8004 is an Ethereum standard proposal addressing this problem. The core idea is to give AI agents on-chain identity. Agents have their own wallet addresses, hold assets, and can interact with other agents or smart contracts.
When EigenCloud is combined with this standard, agents gain not just identity but verifiable history. Since all of an agent's execution records are stored in EigenDA, other agents can check the following before collaborating:
Did it behave as promised in the past?
Does it have a slashing history?
What kinds of tasks has it performed?
This is similar to credit scores or resumes in the human world, but differs in that all records are verifiable.
When propagation of verification, agent-to-agent payments, and agent identity are combined, the basic infrastructure for the agent economy is in place.
It is a structure where agents can verify each other's results, pay for services, and screen for reliable counterparties. The verifiability of individual agents covered in Section 4 extends to the verifiability of inter-agent collaboration.
EigenCloud defines agents with this structure as "Level 1 agents." When agents themselves run as AVSes, the restaked assets on EigenLayer guarantee that agent's honest behavior. Each agent has its own cryptoeconomic guarantees.
However, the value of verifiability is not limited to the agent economy. Situations requiring proof of "what actually happened" exist throughout the digital economy. Prediction markets must prove who adjudicates results; reputation systems must prove how scores are calculated; cross-chain bridges must prove messages were really delivered. EigenCloud's infrastructure is already creating real use cases in these areas. The following section will examine actual cases being built through EigenCloud beyond the agent economy.
In prediction markets, there is the "oracle problem." Smart contracts cannot fetch information from the external world on their own. Someone must input that information to the chain, but how can that someone be trusted?
Source: Kaito
In November 2025, InfoFi platform Kaito and prediction market platform Polymarket launched "Mindshare Markets." Kaito's AI model analyzes social media data to measure public opinion on specific topics, and prediction markets are settled based on these results.
They address the trust problem around Kaito's model with verifiability from EigenCloud's EigenAI. Because Kaito's AI model runs on EigenAI, anyone can reproduce the same analysis from identical input data, and Kaito can prove that its algorithm was executed exactly as declared without having to disclose the algorithm itself.
Source: EigenCloud
Current platforms' rating algorithms are black boxes. Users cannot know what criteria determine which content is recommended or which accounts appear at the top. Even if platforms intentionally push or bury certain content, there is no way to verify. And even if open-source code is published, there is no guarantee that the code actually executed is that published code.
OpenRank is a reputation protocol that targets these problems, seeking to turn ranking and reputation algorithms from platforms' private decisions into shared infrastructure. Through OpenRank, developers can compute reputation scores over public data, and anyone can cryptographically verify that the computation followed the declared rules.
Source: EigenCloud
EigenCloud provides the infrastructure that makes this verification possible. OpenRank's computation nodes run reputation algorithms inside EigenCloud's TEE, and separate verifier nodes independently confirm results with identical inputs. All inputs and outputs are stored in the data availability layer accessible to anyone, and cryptographic commitments to results are recorded on-chain. If verifiers discover discrepancies, they can raise challenges, and dishonest operators have their assets staked on EigenLayer slashed.
Source: EigenCloud
Cross-chain applications face a fundamental challenge. How can messages between blockchains be verified without sacrificing security or decentralization?
Cross-chain messaging protocol LayerZero designed the Decentralized Verifier Network (DVN) model, which lets applications choose their own verifiers for messaging. Instead of forcing a single verifier, it enables selection among various verification methods, from ZK-based proofs to light client verification. LayerZero's architecture led a major transformation in the cross-chain ecosystem, but as LayerZero grew into a solution adopted by over 600 applications, a new question arose: how can stronger guarantees, beyond technical correctness, be added?
In November 2025, LayerZero launched EigenZero, a cryptoeconomic DVN framework utilizing EigenCloud's restaking infrastructure. By introducing slashable collateral as an additional security layer, verifiers can now provide economic deterrence against malicious behavior to applications.
EigenZero operates with optimistic verification. Messages are assumed correct unless challenged, enabling fast cross-chain communication. When verification failures occur, EigenCloud's slashing infrastructure imposes penalties on staked assets. An 11-day challenge period is provided for dispute resolution, and only proofs verified after transactions reach finalized state are subject to slashing. For these functions to work effectively, $5 million worth of ZRO tokens have been staked to guarantee honest behavior by verifiers.
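The optimistic life cycle of a message can be sketched as a small state machine, using the 11-day window from the text; the class and method names are illustrative:

```python
from datetime import datetime, timedelta

CHALLENGE_PERIOD = timedelta(days=11)   # EigenZero's dispute window

class OptimisticMessage:
    """Sketch of optimistic verification: a cross-chain message is presumed
    valid and becomes final unless challenged within the window; a
    successful challenge triggers slashing of the verifier's stake."""

    def __init__(self, submitted_at):
        self.submitted_at = submitted_at
        self.challenged = False

    def challenge(self, now):
        if now - self.submitted_at <= CHALLENGE_PERIOD:
            self.challenged = True       # would trigger slashing of staked ZRO
            return True
        return False                     # too late: message already final

    def is_final(self, now):
        return (not self.challenged
                and now - self.submitted_at > CHALLENGE_PERIOD)

t0 = datetime(2025, 11, 1)
msg = OptimisticMessage(t0)
assert not msg.is_final(t0 + timedelta(days=5))     # still inside the window
assert msg.challenge(t0 + timedelta(days=5))        # valid challenge

msg2 = OptimisticMessage(t0)
assert not msg2.challenge(t0 + timedelta(days=12))  # window closed
assert msg2.is_final(t0 + timedelta(days=12))       # finalized unchallenged
```

The trade-off is explicit: the happy path is fast because nothing is proven up front, while safety rests on the window being long enough for any honest party to raise a challenge.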
This approach allows applications to design more multi-layered criteria for configuring cross-chain security. Whereas before, choices depended only on each DVN's technical verification method, now verifiers can be selected based on the amount of staked tokens and slashing history.
Source: EigenCloud
In November 2025, Flow Traders, one of the world's largest liquidity providers listed on Euronext, entered on-chain through Cap, EigenLayer, and YieldNest.
Cap is a stablecoin protocol designed with the goal of fully automated verifiable finance, where each fund manager operates capital deposited by stablecoin holders and distributes part of the returns to restakers and stablecoin holders. Unlike typical asset management protocols, Cap has delegators who guarantee fund managers' credit, and it utilized EigenLayer to automate the guarantee/redemption mechanism.
Delegators serve as guarantors by depositing ETH staked on EigenLayer, delegating their capital to fund managers. If fund managers fail to distribute returns normally, slashing occurs on the delegators' staked ETH. Cap also actively utilizes EigenLayer's redistribution mechanism: instead of burning slashed funds, they are delivered to affected stablecoin holders, isolating risk at the delegator level as much as possible.
The fact that listed financial institutions like Flow Traders are accessing on-chain credit through EigenLayer is significant as a case study. It represents an attempt to supplement credit relationships that previously relied on legal contracts with code and economic incentives.
Verifiable computation is only part of AI transparency. Questions such as what data an AI model was trained on, what the licenses for that data are, and whether original creators are properly compensated are also important. Current AI is a black box. There is no evidence of what data went in, what code was executed, or who holds the rights. Artists see their styles being reproduced but have no way to verify whether their work was included in the training set or under what license it was used.
Source: EigenCloud
In December 2025, EigenCloud announced a collaboration with Story Protocol. Story Protocol is a protocol that manages the provenance and licenses of data, models, and IP in a programmable way on-chain. It registers content (datasets, model outputs, etc.) on-chain from the source and cryptographically tracks who created it and when. These assets come with programmable licenses that define how they can be used and remixed, and all usage can be measured to automatically trigger royalty payments to contributors.
The combination of the two protocols forms a "Verifiable Intelligence Stack." If Story Protocol provides the provenance layer, EigenCloud provides the execution layer. If Story governs "what is permitted, how assets can be used, and who should be compensated," EigenCloud actually runs and verifies AI workloads under those rules.
Actual cases being built under this system include:
Poseidon: Large-scale datasets are crowdsourced, AVS networks verify and process them, and contributors receive royalties.
OpenLedger: Combines licensed base models and datasets to enable IP-safe fine-tuning, cryptographically proving that training complied with license constraints.
Verio: A decentralized IP enforcement system where AVS verifiers analyze model behavior and outputs to detect license violations and trigger collateral slashing upon violations.
Source: EigenCloud
In 2006, Amazon launched EC2 and S3, opening the era of cloud computing. Until then, companies had to purchase their own servers, lease data centers, and assemble IT infrastructure teams to operate web services. AWS replaced all of this with a few API calls. Once the heavy burden of infrastructure construction was lifted, applications like Uber, Netflix, and Airbnb could grow explosively.
Looking at the structure of the AWS stack reveals the logic of its success. At the bottom is the physical infrastructure of Amazon data centers, and on top of that sit first-party services like S3, EC2, and Lambda. On this foundation, thousands of third-party services like Datadog, Snowflake, and Anthropic are built, and at the top layer, tens of thousands of applications deliver value to users. Each layer abstracts the complexity of the layer below and provides a simple interface to the layer above.
Source: EigenCloud
The picture EigenCloud paints follows a similar layered structure.
At the bottom, the EigenLayer protocol provides shared security as a foundation, and above it, the EIGEN token's verifiable forking mechanism enables intersubjective verification. Primitives like EigenDA and EigenCompute provide core functions of data availability and verifiable computation, and on this foundation various AVSes like zkTLS, oracles, and inference services are built. At the top layer, verifiable applications like prediction markets, reputation systems, lending, and AI agents provide services to users.
If AWS said "build applications without worrying about servers," EigenCloud essentially says "build verifiable applications without worrying about trust." Just as AWS abstracted the complexity of infrastructure construction, EigenCloud is abstracting the complexity of building verification mechanisms.
Before concluding, let us return to the question raised in the introduction. In a world where AI makes more and more decisions, how can we know "what actually happened"?
Until now, the answer has mostly been "unilateral trust." Trust the service provider, trust the platform, trust the institution. But as the UnitedHealth case shows, situations where trust alone is insufficient are increasing. In a world where AI agents autonomously execute financial transactions, judge insurance claims, and recommend content, "just trust us" is no longer a sufficient answer.
EigenCloud's bet is clear. In the coming age of AI agents, and more broadly in the digital age supporting wider use cases, verifiability will become not optional but essential, and EigenCloud will position itself as the only verifiable and scalable computation solution.
Whether EigenCloud's bet will bear fruit is something we will likely see before long.