Summary: the Lido Oracle contract is trustful - it relies on trusted third parties and a quorum mechanism for reports to maintain its state. It is possible to make the contract trustless - allowing any third party to submit state updates - by utilizing Zero-Knowledge proofs to mathematically secure the correctness and validity of each change. This should lead to better security, cost savings and scalability.
Longer: the Lido oracle contract is trustful - it relies on external oracle(s) to honestly calculate the total ether value committed by Lido validators (aka TVL), and uses a quorum mechanism and a list of trusted members to protect from malicious actors.
This leads to three consequences:
Security: A dedicated and resourceful attacker can work towards acquiring control of the majority of oracle members - and abuse the oracle contract when it is achieved.
** practically this is a 51% attack; however, at the moment there are only 5 trusted oracles (see getOracleMembers in the contract), so a 51% attack essentially boils down to overtaking just 3 entities. There’s a proposal to increase the number of oracle members to 11 - this should make it considerably harder (an attacker would need to overtake 6 entities), but still within the realm of the practically feasible.
Cost: Contract requires a considerable amount of expensive storage read/write operations to manage members, check if reports are coming from a trusted source, and keep track of reports while quorum is being accumulated.
Scalability: With the network growing, the cost of quorum calculation grows linearly (O(N)) with the number of trusted oracle members.
A trustless, ZK-proof secured approach can address these shortcomings in the following way:
Security: With proper construction, zk-proof can ensure that only honest calculations produce input that would pass validation. This will make quorum and membership management unnecessary - any input that passes validation can be trusted to be coming from an accurate and honest calculation.
Cost: With a ZK-based solution, input validation can be limited to a small number (2-4) of checks against Merkle Tree Root keccak hashes (obtained from trustworthy 1st-party contract(s) or provided by the Ethereum network), plus an L1 contract invocation to confirm ZK-proof validity.
Scalability: since all honest and accurate calculations should produce the same outcome (there is only one way to sum all staked balances), it is sufficient for the contract to receive a single valid update (per some time frame - e.g. epoch), with no need for multiple parties to report. This removes the need to scale the number of trusted parties as the network grows.
More concretely, this can be implemented utilizing StarkWare’s Cairo programming language and verifier. The solution will contain three pieces:
Cairo program to produce total value locked report and necessary ZK-proofs (Merkle Tree Roots)
L1 Ethereum contract to replace the current oracle contract; most likely not much more complex than a “toy” contract used in one of the Cairo tutorials - essentially just a few lines of code.
Trustless oracle - it would gather the input for the Cairo program, run it, obtain proof of correctness from SHARP, and send them to the contract.
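To make the three pieces concrete, here is a hedged Python sketch of one round of the trustless oracle. Everything here is illustrative - the function names are invented, SHA3 from the standard library stands in for keccak256, and the “Cairo program” step is reduced to the honest computation it would encode (summing balances and committing to the inputs via Merkle Tree roots):

```python
# Illustrative sketch only; none of these names come from a real Lido or
# StarkWare API.
from hashlib import sha3_256  # stand-in for keccak256 used on-chain


def merkle_root(leaves: list[bytes]) -> bytes:
    """Simple binary Merkle root, duplicating the last node on odd levels."""
    if not leaves:
        return b"\x00" * 32
    level = [sha3_256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha3_256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]


def run_oracle_round(validator_keys: list[bytes], balances_gwei: list[int]):
    # 1. "Run the Cairo program": here reduced to the honest computation it
    #    encodes - sum Lido validators' balances and commit to the inputs.
    tvl = sum(balances_gwei)
    keys_mtr = merkle_root(validator_keys)
    state_mtr = merkle_root([b.to_bytes(8, "little") for b in balances_gwei])
    # 2. In the real system, the execution trace would then go to SHARP for
    #    proving, and (tvl, keys_mtr, state_mtr) to the L1 oracle contract.
    return tvl, keys_mtr, state_mtr
```

The key property is that the contract never needs to trust the operator: the MTRs commit to the inputs, and the SHARP proof commits to the computation.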
Sounds interesting. I assume then, the part of this solution would be deployed on StarkNet?
With the current state of StarkNet, I would consider this as something that could potentially be used in some later stages once Cairo and StarkNet get stable mainnet releases. It is still very early and experimental.
Maybe it is less risky in terms of centralisation, but definitely looks more risky in terms of technology currently. Getting Cairo contracts audited is also not trivial, and there are not a lot of experienced professional auditing companies doing that as far as I am aware.
I see a lot of value in working on this area, especially after the protocol upgrade that activates withdrawals, when the oracle will contain more code. However, the development team cannot currently work on this because of the focus on the withdrawal-enabling upgrade. As a LEGO Nominee, I would be happy to propose this scope for a LEGO grant if you want to work on such a proof of concept.
I agree with all these concerns about the early stage of StarkNet adoption, but as I understand it, the topic starter is proposing to run the Cairo program locally for proof generation and feed these proofs to a smart contract that can verify them. So StarkNet is actually not required here. Maybe I got the idea wrong.
@dgusakov TL;DR the contract will accept output from the Cairo program (new value for TVL + Merkle Tree Roots of the inputs), verify the MTRs against the “same” MTRs it will obtain elsewhere, and verify the correctness and validity of the calculation with StarkWare’s contract on the Eth L1 network. If all checks pass - just update the TVL to the new supplied value.
at a very high level, the verification will happen in the L1 contract. The call to update TVL will accept 3 parameters:
new TVL value
Merkle Tree root of all Lido validators’ keys
Merkle Tree root of the BeaconState, or at least a “simplified” version of it (the minimally viable subset is the validators and balances attributes)
The contract will construct a Fact ID (details are here), and verify it with a StarkNet contract deployed in L1. If the fact is confirmed as valid, it evidences that a computation was performed with a known program (the program hash will be given to the contract at construction), given “some” input.
The remaining part is to ensure that the program was run on a “correct” input. This is done by comparing the MTRs in the payload (which are calculated by the Cairo program from the inputs) against corresponding MTRs the contract would obtain elsewhere, from trusted sources. I’ll omit the details for brevity, but in general “elsewhere” would be either L1 builtins or a trusted 1st-party contract operated by Lido. If the MTRs match, it means a correct input was passed to the program.
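For illustration, the two checks can be sketched in Python (the real implementation would be a Solidity contract). The Fact ID construction keccak(program_hash || keccak(program_output)) follows my reading of SHARP’s published scheme - treat the exact encoding as an assumption to verify against StarkWare’s docs - and `is_valid` is a stand-in for the Fact Registry’s `isValid(bytes32)`:

```python
# Python sketch of the L1-side checks; SHA3 stands in for keccak256, and the
# exact Fact ID encoding is an assumption to double-check against SHARP docs.
from hashlib import sha3_256


def compute_fact_id(program_hash: bytes, program_output: list[bytes]) -> bytes:
    # fact = H(program_hash || H(program_output))
    output_hash = sha3_256(b"".join(program_output)).digest()
    return sha3_256(program_hash + output_hash).digest()


def accept_report(new_tvl: int, keys_mtr: bytes, state_mtr: bytes,
                  trusted_keys_mtr: bytes, trusted_state_mtr: bytes,
                  program_hash: bytes, fact_registry) -> bool:
    # 1. Input correctness: payload MTRs must match the ones obtained from
    #    trusted sources (L1 builtins / a 1st-party Lido contract).
    if keys_mtr != trusted_keys_mtr or state_mtr != trusted_state_mtr:
        return False
    # 2. Computation correctness: the fact must be registered, i.e. SHARP has
    #    verified a proof that the known program produced exactly this output.
    output = [new_tvl.to_bytes(32, "big"), keys_mtr, state_mtr]
    return fact_registry.is_valid(compute_fact_id(program_hash, output))
```

Note that the fact check binds the output to the program hash, and the MTR check binds the output to the input - together they leave no room for a dishonest operator.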
Thanks - that’s a good question. At this moment, it doesn’t look like there would be any StarkNet dependency - Cairo programs can be run standalone, and sent to SHARP for verification.
So the dependencies on StarkWare will be essentially Cairo (approaching 1.0 release, as far as I understand), SHARP (I guess also approaching 1.0) and FactRegistry (contract deployed in Eth L1) - no dependencies on StarkNet itself.
In future, when StarkNet reaches maturity it could be possible to replace some parts of the solution with a StarkNet contract, but it’s not strictly necessary.
@kadmil @ujenjt Awesome, would love to have a chat with you guys on the next steps. So far I have some partial work to confirm feasibility at a high level, but I would be happy to expand it further to a proof-of-concept level.
Exactly - the Cairo program will be run standalone, by the operator of the “trustless oracle”. Program’s input, output and code will then be submitted to SHARP (to prove the execution correctness via ZK proof), and then the Fact Registry.
I think the best way to describe this is this picture from Cairo blog:
Hey, I am super supportive of this effort, since a zk-proof based oracle will lower the risks you mentioned in the first message.
However, there are some limitations and considerations that prevent delivering this feature in the shortest possible timeframe:
The BeaconState root isn’t exposed at the Execution Layer (at least now). I haven’t heard about any plans to include EIP-4788 in Shanghai/Cancun (i.e., shouldn’t expect to happen before mid-2023). Maybe you have some promising details here?
Currently, the Lido oracle reports only TVL-related data (beacon balance and beacon validator count). However, once withdrawals are enabled, we expect some significant changes (e.g., the withdrawal credentials address balance picked at the historical block coherent with a reportable CL epoch, or the number of exited validator keys and which validators they belong to since the previously completed report).
Even if the BeaconState root had been exposed, complex report data could also require additional beacon spec constants or structures to be exposed.
And, nevertheless, I’m all for funding a proof-of-concept version through LEGO.
Hi! Those are very good points - thanks for asking!
The BeaconState root isn’t exposed at the Execution Layer
This is the main culprit to solve before making it a production system. One option to work around this is to build a separate contract that would compute the MTR of the beacon state (or, as I’ve noted in the post, a subset of it). How exactly to implement this workaround (i.e. on-chain contract vs. off-chain oracle) is an open question. I don’t have a complete answer to this yet, but I believe it’s a solvable problem (and solvable in an efficient way).
A more straightforward solution is to wait until EIP-4788 is implemented, but of course that’s “when” and “if” it is implemented.
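As a toy illustration of what such a workaround would commit to, here is a simplified root over just the validators and balances attributes. This is deliberately not real SSZ hash_tree_root - the rolling hash and the pairing H(validators_root || balances_root) are illustrative simplifications, and SHA3 stands in for keccak256:

```python
# Illustrative "simplified BeaconState" commitment; not real SSZ merkleization.
from hashlib import sha3_256


def _list_root(leaves: list[bytes]) -> bytes:
    # Rolling hash over the list; a real implementation would use proper
    # SSZ merkleization so individual entries can be proven.
    root = b"\x00" * 32
    for leaf in leaves:
        root = sha3_256(root + leaf).digest()
    return root


def simplified_state_root(validator_pubkeys: list[bytes],
                          balances_gwei: list[int]) -> bytes:
    validators_root = _list_root(validator_pubkeys)
    balances_root = _list_root([b.to_bytes(8, "little") for b in balances_gwei])
    return sha3_256(validators_root + balances_root).digest()
```

Whether this commitment is produced by an on-chain contract or an off-chain oracle, the property that matters is that any change to a key or balance changes the root.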
once withdrawals are enabled, we expect that some significant changes will happen
I think one way to support this would be to turn to an “event sourcing” solution - i.e. replace the “list of Lido validator keys” with a “list of events that affect Lido validator keys”, basically making the input append-only and allowing more reports to be derived from it (e.g. the number of exited validator keys would be straightforward). Such a history can then be kept off-chain, and the correctness of the history supplied to the computation (the Cairo program) can be ensured in a similar way - via a Merkle Tree root stored on-chain.
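A toy sketch of the event-sourcing idea, with invented event names: the log is committed to with a rolling hash (so appending never rewrites history), and reports such as the number of exited validators are derived by replaying it:

```python
# Illustrative event-sourced validator key list; event kinds ("added",
# "exited") and the commitment scheme are invented for this sketch.
from hashlib import sha3_256  # stand-in for keccak256


def log_root(events: list[tuple[str, bytes]]) -> bytes:
    """Rolling commitment: root_n = H(root_{n-1} || kind_n || key_n).
    Append-only by construction - extending the log changes the root but
    never invalidates an earlier prefix."""
    root = b"\x00" * 32
    for kind, key in events:
        root = sha3_256(root + kind.encode() + key).digest()
    return root


def derive_report(events: list[tuple[str, bytes]]) -> dict:
    """Replay the log to derive report figures (e.g. exited validator count)."""
    active, exited = set(), set()
    for kind, key in events:
        if kind == "added":
            active.add(key)
        elif kind == "exited":
            active.discard(key)
            exited.add(key)
    return {"active": len(active), "exited": len(exited)}
```

The Cairo program would take the event list as input, and the contract would check its commitment against the on-chain root, exactly as with the other MTRs.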
Even if BeaconState root had been exposed, it could have required additional beacon spec constants or structures being exposed either in case of the complex report data
That’s a good point! This highlights another important trait of this approach - the contract does not care how the oracle obtains the data, as long as it is correct. Correctness is ensured by computing a Merkle Tree root of the BeaconState in the oracle and comparing it with the one available to the contract. The implied way for the oracle to obtain the BeaconState is to query it from an Eth2 API endpoint through any provider of their choosing (e.g. Infura, Alchemy, Cloudflare, self-hosted, etc.)
… but the community guidelines bot seems to have gone on a killing spree and flagged virtually every message (including the one with the results I’m trying to post) as potential spam.
For now I have put it into a Google doc, but I’m happy to move it to a dedicated post if necessary.
In short, if “access to all on-chain data” claimed by Axiom in their announcement means “can access both execution and consensus layer data” (BeaconState in particular), it should be possible to build similar solution on Axiom.
Longer:
More precisely, the three components of this solution are:
Ability to perform computations and confirm their correctness/validity/etc. - “arbitrary expressive compute” sounds like it.
Access to Lido validators’ public keys - these are managed by an Execution layer contract, so this is covered by Axiom as well.
Access to Consensus layer BeaconState, or at least validators’ keys and balances - this part is not clear.
As far as I understand (and correct me if I’m wrong), The Merge actually merged the old PoW (aka Eth1.0) chain and the Beacon Chain by embedding execution-layer blocks into Beacon Chain blocks (as far as I can judge from the spec).
Axiom demos (twitter post, demo app) only demonstrate access to consensus layer randomness provider, so it’s not clear if access to consensus layer state is supported.
Thanks for the analysis. I’m assuming it probably means EL data only, but we should have the BeaconState on the EL hopefully within 2 (max?) hardforks after Shanghai. It may be worth investigating / following up in case there is some way - similar to how the randomness is accessed, although I think that is explicitly included in the EL following the Merge via RANDAO, which is pushed into the EL data by the CL, whereas the BeaconState is not yet.