Verification can be considered a human instinct, yet as digital computing environments grow more complex, the areas users cannot verify for themselves keep expanding. Accordingly, academia and industry have been steadily researching "verifiable computing."
EigenCloud and Boundless are representative projects currently leading under the banner of "verifiable computing." Because they appear to share the same goal, the two may look like competitors, but they actually target verifiability in different domains.
EigenCloud demonstrates strength in areas requiring intersubjective judgment based on economic security, while Boundless excels in areas requiring verification of objective and deterministic computations based on zero-knowledge proofs. Therefore, these two projects are likely to operate complementarily as verifiable computing expands in the future.
Source: Luminate
Throughout the day, we repeat hundreds of small acts of verification. We message a friend to reconfirm a meeting time we set last week, ask friends and colleagues whether Squid Game 3 is actually worth watching, and browse multiple shopping sites and YouTube reviews before buying a keyboard. These actions come so naturally that we forget they are deliberate acts of 'verification.'
However, technological progress has created areas we can no longer verify for ourselves. In the early internet, most software was light enough to run on personal computers, so users could directly inspect the programs they were running. That situation has fundamentally changed as software grew more complex and cloud computing spread as a result. We now interact with dozens of remote servers every day: checking email, browsing social media, using online banking, watching streaming services. In all of these activities we simply have to trust that invisible servers are behaving correctly, and most people never actually attempt to verify what happens on the other end.
The rapid development of artificial intelligence has made this verification problem even more complex. LLMs have advanced quickly while becoming black boxes that even their developers do not fully understand, and with standards like MCP (Model Context Protocol) giving models free access to external services, they have gained enormous autonomy. In this situation, it is becoming increasingly important to verify facts such as: Was this decision really made by the AI? Did a human intervene along the way? Was the model trained on biased or manipulated data?
So despite the human instinct to seek verification, the difficulty of verifying digital environments creates a gap in which that instinct rarely turns into actual verification. To bridge this gap, computer science researchers have worked on verifying computation at the infrastructure level and delivering results users can trust, a line of work that has grown into the field of "verifiable computing."
Source: Makery
Implementing verifiability requires solving three fundamental dilemmas.
The succinctness dilemma is the most practical problem verification systems face. If verifying a computation's result requires more resources than the original computation, there is no reason to introduce a verification system in the first place. This becomes critical in environments like blockchains, where thousands of nodes must verify the same computation: if verification is not efficient, the scalability of the entire system is fundamentally constrained.
The integrity dilemma concerns the essence of verification. Producing the correct final result is not enough; the system must guarantee that the computation itself was performed as intended and that nothing was manipulated or tampered with along the way. For example, the fact that an AI model produces the right answer does not mean its reasoning process is transparent and trustworthy. The accuracy of the result and the integrity of the process are separate issues.
The trustlessness dilemma concerns the philosophical foundation of verification systems. The verification process itself should not depend on a trusted third party: if verifying the verifier requires yet another verifier, we fall into an infinite regress of trust. True verifiability means a system in which anyone can independently confirm a result without relying on external authority or trust.
How can these conflicting requirements be resolved? Researchers have approached the problem in three main directions, each implementing a distinct form of verifiability based on different philosophies and technical characteristics.
Re-execution based verification is the most intuitive and transparent method: the verifier runs the original computation again from start to finish and checks that every intermediate step and the final result match. Its biggest advantage is complete transparency: since the entire computation is disclosed and checkable, no manipulation can stay hidden. Its fundamental limitation is that it requires as much storage and processing time as the original computation. Blockchain consensus mechanisms are the representative example of this approach; because integrity holds as long as a sufficient share of honest nodes re-executes the computation, not every participant has to verify everything, which partially mitigates the succinctness problem.
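To make the idea concrete, here is a minimal sketch of re-execution based verification. The function names are illustrative, not from any particular system, and `compute` stands in for an arbitrary deterministic program:

```rust
// `compute` stands in for an arbitrary deterministic program.
fn compute(input: u64) -> u64 {
    // The original (potentially expensive) computation; squaring is a stand-in.
    input.wrapping_mul(input)
}

// Re-execution verification: accept the claimed result only if an independent
// run over the same input reproduces it exactly.
fn verify_by_reexecution(input: u64, claimed_output: u64) -> bool {
    compute(input) == claimed_output
}

fn main() {
    // The verifier pays the full cost of the computation again, which is
    // exactly where this approach meets the succinctness dilemma.
    println!("valid: {}", verify_by_reexecution(7, 49));
    println!("valid: {}", verify_by_reexecution(7, 50));
}
```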
Cryptographic verification pursues efficiency and security simultaneously through mathematical rigor. The most representative technique is the zero-knowledge proof, which lets anyone check a succinct proof that a computation was performed correctly, addressing integrity and trustlessness at once, and because the proof need not disclose the underlying data or execution trace, it preserves privacy as well. Trusted Execution Environments (TEE) also belong to this category: computation runs inside a secure environment provided by the hardware manufacturer and the result is cryptographically signed to guarantee integrity, addressing the trustlessness dilemma by separating the party that executes the software from the party that verifies the result.
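As a rough illustration, cryptographic verification changes the shape of the problem: the prover emits a short proof and the verifier checks it without re-running the computation. The interface below is hypothetical, not any specific library's API:

```rust
// A succinct proof: much smaller than the computation's execution trace.
#[allow(dead_code)]
struct Proof {
    bytes: Vec<u8>,
}

// Hypothetical interface, not any specific proving system's API.
#[allow(dead_code)]
trait VerifiableComputation {
    // Run the computation on a private input and return the public output
    // together with a proof of correct execution.
    fn prove(&self, private_input: &[u8]) -> (Vec<u8>, Proof);

    // Check the proof against the public output alone. Verification cost is
    // (near-)constant regardless of how heavy the original computation was,
    // and the private input never has to be revealed.
    fn verify(&self, public_output: &[u8], proof: &Proof) -> bool;
}

fn main() {
    // Concrete implementations would be supplied by a proving system; the
    // point of the interface is the asymmetry between proving and verifying.
}
```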
Economic security based verification, used mainly in web3 environments, relies on economic rationality instead of mathematical absoluteness. It starts from the observation that people generally pursue their economic self-interest, and secures the system by rewarding correct behavior and punishing incorrect behavior. Its biggest advantage is flexibility: it can be applied even where complex mathematical proofs are hard to construct, and responses when errors occur are fast and intuitive. It can also reflect social consensus as well as technical correctness, which makes it usable in areas that require subjective judgment.
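A minimal sketch of this incentive logic, with illustrative names and numbers rather than any real protocol's parameters:

```rust
// Illustrative only: an operator posts a stake, earns a reward when its answer
// matches the accepted outcome, and is slashed otherwise. Security here is
// purely economic: misbehaving should be unprofitable.
struct Operator {
    stake: u64,
}

fn settle(op: &mut Operator, answered_correctly: bool, reward: u64, slash_fraction: f64) {
    if answered_correctly {
        op.stake += reward;
    } else {
        // Burn a fraction of the stake as punishment.
        let slashed = (op.stake as f64 * slash_fraction) as u64;
        op.stake -= slashed;
    }
}

fn main() {
    let mut honest = Operator { stake: 1_000 };
    let mut dishonest = Operator { stake: 1_000 };
    settle(&mut honest, true, 50, 0.5);
    settle(&mut dishonest, false, 50, 0.5);
    // Rational operators follow the rules as long as the expected slash
    // outweighs whatever could be gained by cheating.
    println!("honest: {}, dishonest: {}", honest.stake, dishonest.stake);
}
```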
Each approach has unique strengths and limitations, and in actual applications, appropriate methods are selected based on requirements or multiple approaches are combined. In the next section, we'll examine EigenCloud and Boundless, projects representing two contrasting approaches among these: cryptographic verification and economic security.
When you hear the word "Verifiability," what projects come to mind?
Two projects immediately come to mind: EigenCloud and Boundless. EigenCloud has put verifiability front and center since its EigenLayer days, while Boundless has recently been educating retail users about the importance of verifiability under the catchy phrase "Verifiable Compute."
While the two share the common goal of expanding "verifiability," few articles discuss whether the verifiability they address is actually the same. Is the "verifiability" they target identical, and are they in a competitive relationship? The following sections examine how EigenCloud and Boundless each implement verifiability and how the markets they target differ.
EigenCloud is a project announced when EigenLayer, an Ethereum restaking protocol, declared its expansion to a "verifiable cloud." While EigenLayer is already a well-known project, I'll briefly explain it since EigenCloud's security is still based on EigenLayer.
EigenLayer started from a simple question: why can't the same capital be used more than once? Traditionally, each blockchain or protocol had to build its own validator set and token economics, a high barrier to entry that forced new protocols to attract massive capital just to secure sufficient security. On top of that, each protocol's security depended on the market value of its own token, creating a vicious cycle in which security weakened whenever the token price fell. EigenLayer addressed this by introducing restaking, a mechanism that was highly innovative at the time: by allowing ETH already staked on Ethereum to be reused to secure other protocols, it greatly increased capital efficiency and let new protocols obtain strong economic security immediately. On this foundation, EigenLayer built the AVS (Autonomous Verifiable Services) ecosystem, an environment in which web3 services that depend on off-chain computation can be granted verifiability and operate with higher reliability.
Source: EigenCloud
EigenLayer later broadened this vision: beyond simply sharing Ethereum's security, it introduced "EigenCloud," a new paradigm that combines the programmability of cloud computing with blockchain verifiability.
Setting aside the technical structure, the most distinctive aspect of EigenCloud is that it can provide verification for areas that require "intersubjective judgment." Intersubjective verification concerns judgments that cannot be proven mathematically but on which any two rational observers, looking at the same evidence, would reach the same conclusion. These are not problems that can be settled by a formula, yet an objective answer can be obtained by synthesizing information from reliable sources. Intersubjective judgment is needed across both web2 and web3: whether a DAO governance proposal properly reflects the community's will, whether a memecoin's incentive distribution is fair, whether a piece of social media content was moderated appropriately. In these areas, EigenCloud provides a framework that can replace today's centralized judgments with decentralized judgments backed by economic security and reached through agreement between rational parties.
How does EigenCloud actually implement such intersubjective verification? With the expansion to EigenCloud, EigenLayer introduced two new components, EigenVerify and EigenCompute, alongside EigenLayer and EigenDA. These two components extend the intersubjective verification that was previously possible only through $EIGEN and EigenLayer to a much wider range of environments.
EigenVerify is the heart of EigenCloud, handling both conventional objective verification (re-execution and zero-knowledge proofs) and intersubjective verification. Each judgment is processed by a majority consensus of operators who stake $EIGEN. When an application runs into a situation that requires intersubjective judgment, anyone can submit a verification request to EigenVerify, and its operators review information from reliable sources before deciding by majority vote.
What's important here is that operators stake their economic interests in their judgments. If the majority of operators make clearly wrong judgments, this becomes malicious behavior observable on-chain. At this point, anyone can burn a certain percentage of $EIGEN to start a token fork. When a fork begins, the community decides which side made the correct judgment, and in the correct fork, malicious operators' staked assets are burned while the person who started the fork receives rewards.
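The following is a hypothetical sketch of the stake-weighted majority vote described above; the types, names, and numbers are assumptions for illustration, and EigenVerify's actual dispute and fork process involves more steps:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a stake-weighted majority vote; names and numbers are
// illustrative, not EigenVerify's actual interfaces. The real protocol also
// resolves disputed outcomes through the $EIGEN fork process described above.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Verdict {
    Yes,
    No,
}

struct Vote {
    stake: u64, // staked $EIGEN behind the operator's judgment
    verdict: Verdict,
}

// Return the verdict backed by the most stake.
fn stake_weighted_majority(votes: &[Vote]) -> Verdict {
    let mut tally: HashMap<Verdict, u64> = HashMap::new();
    for v in votes {
        *tally.entry(v.verdict).or_insert(0) += v.stake;
    }
    tally
        .into_iter()
        .max_by_key(|(_, stake)| *stake)
        .map(|(verdict, _)| verdict)
        .unwrap_or(Verdict::No)
}

fn main() {
    let votes = vec![
        Vote { stake: 400, verdict: Verdict::Yes },
        Vote { stake: 250, verdict: Verdict::Yes },
        Vote { stake: 300, verdict: Verdict::No },
    ];
    // If this majority later turns out to be clearly wrong, anyone can trigger
    // a fork and the losing side's stake is burned.
    println!("accepted verdict: {:?}", stake_weighted_majority(&votes));
}
```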
This mechanism is powerful because it combines economic incentives with social verification. Operators must risk losing their assets if they make clearly wrong judgments, while the entire community serves as the final judge. This enables obtaining trustworthy answers to complex real-world problems without depending on centralized oracles or judgments from a few experts.
EigenCompute is the abstraction layer that makes this verification mechanism easily accessible to developers. From a developer's perspective, implementing a complex economic security mechanism from scratch every time they build an application that needs intersubjective verification is practically impossible. Through EigenCompute, EigenCloud lets developers consume these verification services via simple APIs.
Developers simply need to write their application logic in regular Docker containers and deploy them to EigenCompute. Inside containers, any programming language can be used, necessary libraries can be freely called, and external APIs can be accessed - this is almost identical to existing cloud development environments. The difference is that container execution results are automatically recorded in EigenDA and can receive verification through EigenVerify when needed.
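For a sense of how ordinary this developer experience is, here is an illustrative worker program of the kind that could be packaged into such a container. Nothing in it is EigenCloud-specific, and the surrounding deployment surface is assumed rather than taken from EigenCompute's actual tooling:

```rust
use std::io::{self, Read};

// Illustrative only: ordinary worker logic of the kind that could be packaged
// into a Docker container. Nothing here is EigenCloud-specific; per the
// description above, the platform records the container's execution results to
// EigenDA, where EigenVerify can later judge them if challenged.
fn main() -> io::Result<()> {
    // Read an arbitrary task payload, exactly as any cloud worker would.
    let mut payload = String::new();
    io::stdin().read_to_string(&mut payload)?;

    // Any business logic, library, or external API call could go here.
    let result = payload.trim().len();

    // Whatever the container emits is what gets recorded and, if disputed,
    // put in front of EigenVerify operators.
    println!("{{\"input_bytes\": {}}}", result);
    Ok(())
}
```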
Because EigenCloud's verification is backed by staked assets rather than cryptographic proofs, it enables a security model different from cryptographic verification, one in which the security level can be tuned to an application's importance. An application can use container execution results immediately, adopt an optimistic model that leaves a window for objections, or choose the strongest model, in which discovered malicious behavior triggers a fork of the service along with $EIGEN and the malicious actors' tokens are burned. The result is a verification system that incorporates not only economic incentives but also social consensus.
Source: Boundless
Boundless is a prover marketplace protocol built on RISC Zero's zkVM. It aims to dramatically cut the cost and latency of proof generation by having multiple provers compete to generate proofs, which are auctioned through its marketplace. RISC Zero's focus is to take the integrity of off-chain programs and data used by on-chain environments and make it verifiable, combining Boundless's cheap, fast proof generation with its own high-performance zkVM.
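To illustrate the competitive dynamic, a proof request can be thought of as a reverse auction in which the cheapest capable prover wins the job. The auction format and names below are assumptions for illustration, not Boundless's actual on-chain mechanism:

```rust
// Hypothetical sketch of competition in a proof marketplace: a request is
// broadcast, provers submit price bids, and the cheapest bid wins. This is an
// illustration of the incentive structure, not Boundless's actual mechanism.
#[derive(Debug)]
struct Bid {
    prover: &'static str,
    price: u64, // price asked to generate the requested proof
}

// Select the cheapest prover for a given proof request.
fn select_winner(bids: &[Bid]) -> Option<&Bid> {
    bids.iter().min_by_key(|b| b.price)
}

fn main() {
    let bids = vec![
        Bid { prover: "prover-1", price: 120 },
        Bid { prover: "prover-2", price: 95 },
        Bid { prover: "prover-3", price: 140 },
    ];
    // Competition between provers is what pushes proof prices down.
    println!("winning bid: {:?}", select_winner(&bids));
}
```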
Representative products built on RISC Zero include OP Kailua, which lets existing optimistic rollups easily delegate proof generation and convert into ZK rollups, and Steel, which cheaply and quickly proves computations over the blockchain's past state, queries that previously had to rely on external indexers. Building on the highly flexible RISC-V architecture, RISC Zero aims to supply nearly any computation that can run off-chain to the chain in verifiable form, with the Boundless protocol underneath handling decentralized, efficient ZK proving. Hence the motto behind the protocol: removing the limits, or boundaries, of on-chain computation (= Boundless).
Verification through Boundless offers a different kind of verifiability from EigenCloud. Because ZK proofs give Boundless mathematically rigorous verification for anything that can be "objectively proven," the areas Boundless targets are deterministic computations with clear inputs and outputs, where there is no room for social consensus or subjective judgment.
Looking more specifically, Boundless layers its own notion of verifiability on top of the mathematical rigor of RISC Zero's ZK proofs, and its core is Proof of Verifiable Work (PoVW). PoVW cryptographically measures and verifies the actual computational work each prover performs. When a proof is generated in the zkVM, the exact number of computation cycles it required is embedded as cryptographically secured metadata. This metadata cannot be manipulated and includes a unique nonce so that the same proof cannot be counted twice. Through PoVW, the Boundless protocol can objectively measure how much computation a prover actually performed when it claims to have generated a given proof. Provers then receive token rewards based on the work recorded through PoVW, but if an error is found during verification the proof immediately becomes invalid and the prover receives nothing, combining mathematical rigor with an economic mechanism.
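A simplified sketch of the PoVW-style accounting described above; the field names and reward logic are assumptions for illustration, not Boundless's implementation:

```rust
use std::collections::HashSet;

// Simplified sketch of PoVW-style accounting: each proof carries the cycle
// count of the underlying zkVM execution plus a unique nonce, and only proofs
// that verify are credited. Field names and logic are illustrative assumptions.
struct ProofRecord {
    prover: &'static str,
    cycles: u64, // computational work attested inside the proof
    nonce: u64,  // prevents the same proof from being counted twice
    valid: bool, // outcome of verifying the proof itself
}

// Credit each prover with its verified cycles, rejecting invalid proofs and
// duplicate nonces.
fn credited_work(records: &[ProofRecord]) -> Vec<(&'static str, u64)> {
    let mut seen_nonces = HashSet::new();
    let mut credits: Vec<(&'static str, u64)> = Vec::new();
    for r in records {
        if !r.valid || !seen_nonces.insert(r.nonce) {
            continue; // a failed verification or replayed nonce earns nothing
        }
        match credits.iter_mut().find(|(p, _)| *p == r.prover) {
            Some((_, c)) => *c += r.cycles,
            None => credits.push((r.prover, r.cycles)),
        }
    }
    credits
}

fn main() {
    let records = vec![
        ProofRecord { prover: "prover-1", cycles: 1_000_000, nonce: 1, valid: true },
        ProofRecord { prover: "prover-1", cycles: 1_000_000, nonce: 1, valid: true }, // duplicate nonce
        ProofRecord { prover: "prover-2", cycles: 2_500_000, nonce: 2, valid: true },
        ProofRecord { prover: "prover-3", cycles: 9_999_999, nonce: 3, valid: false }, // fails verification
    ];
    // Token rewards would then be split pro rata to these credited cycles.
    println!("{:?}", credited_work(&records));
}
```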
Consequently, these characteristics make the verifiability Boundless provides specialized for objective, deterministic computation. Rollup state transitions and the complex computation results supplied through external oracles are all deterministic computations with clear inputs and outputs; there is no ambiguity in these areas.
Source: Boundless
Having examined both projects, I believe the two target somewhat different scopes of "verifiable computing" and will increasingly develop in a complementary way. EigenCloud's economic security shows its strength in verification that involves complex software or subjective judgment, which is hard to prove fully with zero-knowledge proofs, while Boundless's ZK proofs show their strength in objective, mathematically verifiable areas. Moreover, in real applications where "verifiable computing" becomes widely adopted, the two will often be used together. As a concrete example, the slashing mechanism EigenLayer had struggled to implement, because tracking the staked token balances and on-chain activity of so many operators was difficult, was implemented using RISC Zero's zkVM. This shows that the two projects have clearly distinct specialties and are not targeting conflicting forms of verifiability.
Ultimately, both platforms pursue the larger vision of a "verifiable internet": a future in which most of the digital services we use become verifiable, a world where we can check AI decisions, trust the accuracy of financial calculations, and prove that personal data was processed correctly. What we are witnessing is not simply technological progress but a historical turning point in which the paradigm of digital trust itself is changing, a shift from "Trust, but Verify" to "Verify, to Trust" that has the potential to reshape the entire digital infrastructure of our society. EigenCloud and Boundless will be key drivers in creating this future.