A proposal for partnering with Nethermind to design a mechanism for good validator set maintenance. Phase 2

TL;DR

This is a proposal to fund Nethermind to design a Sybil- and white-labeling-resistance mechanism. The delivery will include a detailed discussion of the final design and a systematization of knowledge (SoK) for Sybil-resistance mechanisms, oracle systems, prediction markets, and token-curated registries. During the project, a dedicated team will investigate what the state of the art is, what solutions are or will be used in practice, and how. Then we will propose concrete mechanisms to make Lido’s network Sybil- and white-labeling-resistant. The project will be one of the steps toward enabling Lido to onboard new operators in a permissionless manner. This is a continuation of our previous proposal. The project will take 28 weeks, and its cost of 700 000 DAI will be covered by Lido DAO.

Proposer and work mindset

Proposer

Michał Zając on behalf of Nethermind.

About Nethermind

Nethermind is a team of world-class builders and researchers with expertise across several domains: Ethereum Protocol Engineering, Cryptography Research, Layer-2s, Decentralized Finance (DeFi), Miner Extractable Value (MEV), Smart Contract Development, Security Auditing, and Formal Verification, among others.

Our Research team comprises mathematicians and engineers who work on analyzing, breaking, and designing blockchain and cryptographic schemes. Our expertise and interests span the fields of zero-knowledge proofs, non-deterministic programming, Distributed Validator Technology, liquid staking, and decentralized identity.

Working to solve some of the most challenging problems in the blockchain space, we frequently collaborate with renowned companies and DAOs, such as the Ethereum Foundation, StarkWare, Lido Finance, Gnosis Chain, Aave, Flashbots, xDai, OpenZeppelin, Forta Protocol, Energy Web, POA Network, and many more. We actively contribute to Ethereum core research and development, EIPs, and network upgrades with the Ethereum Foundation and other client teams.

General work mindset

The following principles will drive the development of the protocols:

  • All the design considerations and risk analysis will be done with the consent of the Lido DAO.

  • Nethermind will set up a dedicated team for this effort.

  • All proposed solutions will come with a security analysis. Where possible, the protocols’ security will be formally proven.

  • Milestones and deliverables will be small to ensure a good overview of the team’s progress.

Terminology

  • Operator: A party that runs, or participates in running, one or many Ethereum validators. Operators, solely or jointly, have access to the signing keys of one or more validators but do not necessarily have control of the corresponding withdrawal credentials. Operators can control multiple nodes.

  • Node: A virtual sub-party (a piece of hardware and software) controlled by an operator that performs the operator’s duties w.r.t. a concrete validator. Whereas an operator may control multiple validators, a node represents a single, concrete validator.

  • White-label operators: If a party, who was onboarded as an operator, delegates the operation of a node to another party, we call the latter a white-label operator.

  • Sybil operator: We call a party Sybil if it controls two or more operators behind the scenes. A Sybil-protection mechanism is a set of countermeasures that makes it difficult for a party to have two (or more) operators onboarded such that the protocol is unaware they are colluding.

  • Protocol score (or score): The protocol’s internal score, measuring how much an operator contributes to the overall quality of the operator set.

  • External reputation: An operator’s reputation in ecosystems external to Lido, e.g., in real life, in Web2 services, in other Web3 services, etc.

  • Arbiter protocol: We call a protocol an “arbiter protocol” if it is a decentralized oracle, a token-curated registry, or a prediction market.

Ideal mechanism overview

An ideal mechanism evaluates the Lido DAO’s validator set according to the operator & validator set strategy described in this note by Lido. More precisely, the mechanism must have methods for improving the validator set whenever there is an option to do so. It must have zero input from permissioned roles (i.e., no admins/committees). Furthermore, input from LDO, stETH, and ETH token holders must have low to zero impact.

The mechanism has to be capital efficient: Collateral for operators can be used, but it can’t be the single or primary mechanism; it has to function mainly by staking with other people’s money.

The mechanism has to account for the bull-bear cycle effect in a way that would allow operators to stop validating if it becomes too expensive. Additionally, the mechanism has to reduce the number of operators in bear markets and expand in bull markets.
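
As a toy illustration (ours, not part of the proposal), the expand-and-contract requirement can be pictured as the operator count that the protocol's commission pool can sustain: operators remain viable only while their share of rewards covers their running costs. Every name and number below is hypothetical.

```python
def target_operator_count(total_stake_eth: float,
                          apr: float,
                          commission: float,
                          yearly_cost_per_operator_usd: float,
                          eth_price_usd: float) -> int:
    """Largest operator set in which an equal share of the commission
    pool still covers each operator's yearly running costs."""
    yearly_rewards_usd = total_stake_eth * apr * eth_price_usd
    commission_pool_usd = yearly_rewards_usd * commission
    return max(1, int(commission_pool_usd // yearly_cost_per_operator_usd))

# Bull market: a high ETH price supports a larger operator set ...
print(target_operator_count(5_000_000, 0.04, 0.05, 50_000, 3_000))  # 600
# ... bear market: the same stake supports fewer operators.
print(target_operator_count(5_000_000, 0.04, 0.05, 50_000, 1_000))  # 200
```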

The mechanism has to prevent the set of operators from becoming worse. This includes, but is not limited to, avoiding the following:

  • reduced performance,

  • reduced neutrality,

  • offline time,

  • slashable offenses,

  • reduced jurisdictional geodiversity,

  • reduced localization diversity of the infrastructure,

  • reduced Ethereum client diversity and other diversity vectors,

  • giving up independence (e.g., in a merger),

  • destructive MEV,

  • delegation of operation (delegating operator duties should reduce the amount of stake that the operator can control, potentially removing it from the operator set altogether).

Improving operational quality should increase an operator’s revenue (by increasing the stake or the commission).

The stake should be distributed flat-ish. Operators should only control up to 1% of total ETH staked through Lido.

The mechanism cannot overfit on any particular parameter, but most importantly, it cannot overfit on performance: super-performant operators often cut corners or sacrifice specific attributes for others. Furthermore, overfitting on performance and profitability is inherently centralizing due to economies of scale and, in general, cost-minimizing (i.e., locating in places with the cheapest servers, bandwidth, etc.). That being said, the mechanism has to ensure an overall good level of performance.

The mechanism should allow for a new operator to enter the set of operators with essentially no collateral or reputation and work its way to an optimal position within the network of operators. That should be possible, although it may take a long time, if the operator has a “good enough” performance and is ecosystem aligned, independent, and runs its hardware in non-concentrated geographical/jurisdictional areas. There might be a need for an insurance pool or collateral to enter at zero or to rise to the top, but it could be optional in the middle.

The amount of stake controlled by an operator should depend on a “protocol score.” This score should reflect how much the operator contributes to having a good overall validator set. In particular, an operator joining the protocol should be given a low to neutral score, implying that it can control only a very limited stake. The score, and thus the amount of the controlled stake, should increase when the operator contributes to some or all of the following. We note that the exact scoring mechanism is yet to be researched, so the list below is provisional.

  1. Providing additional bond.

  2. Providing good quality services. Users can build their reputation (and thus score) by providing good services. Defining what “good services” means will be part of Phase 3.

  3. Providing information about itself, for example, revealing its Web2/Web3/real-life identity or other credentials such as educational institution diplomas, GitHub activity, hackathon awards, etc. Ideally, this information will be provided in a privacy-preserving yet verifiable manner.

An operator that provides its identity has more to lose than just bond when it misbehaves. Its external reputation is at stake. Additionally, an operator that (verifiably) reveals technical knowledge credentials is more likely to operate its validators properly. In some scenarios, this could increase the Sybil resistance of the network (though this increase is certainly limited, and a complete Sybil resistance solution would need to rely on further mechanisms).

Notably, users who want to remain anonymous would still be able to control a substantial stake by providing bond (Point 1) or gaining a reputation by providing good quality services (Point 2).
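
For intuition only, here is a minimal sketch of how such a score could translate into a per-operator stake cap, honoring the 1% hard cap and the flat-ish distribution goal above. The linear mapping and the score scale are hypothetical placeholders for what later phases would actually define.

```python
HARD_CAP_FRACTION = 0.01  # no operator controls more than 1% of total stake

def stake_cap_eth(score: float, total_staked_eth: float) -> float:
    """Map a protocol score in [0, 1] to a maximum controllable stake.

    New operators (score near 0) can control only a very limited stake;
    the cap grows with the score but never exceeds the 1% hard cap.
    """
    score = min(max(score, 0.0), 1.0)
    return score * HARD_CAP_FRACTION * total_staked_eth

# With 5M ETH staked: a new operator (score 0.05) is capped at ~2,500 ETH,
# while a long-standing, top-scored operator is capped at ~50,000 ETH (1%).
print(stake_cap_eth(0.05, 5_000_000))
print(stake_cap_eth(1.0, 5_000_000))
```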

General objectives

We will assist Lido in creating and maintaining a permissionless and high-quality validator set mechanism. This entails:

  1. Designing and implementing methods to ensure that validators are run by a high-quality set of operators. In particular, each operator performs its duties on its own and does not cede them to an external party (i.e., it does not hire a white-label operator), is not secretly associated with other operators, and ensures that its hardware and software run performantly.

  2. Conducting a thorough economic analysis to understand how market fluctuations, or changes in the Ethereum protocol itself, can compromise the system’s security.

The project will be divided into four phases:

  • Phase 1: We survey the literature and state-of-the-art approaches to identity and attestation schemes. This phase has been completed already.

  • Phase 2: During this phase, we will survey the literature and state-of-the-art approaches to oracles, token-curated registries, prediction markets, and Sybil- and white-labeling-protection mechanisms. The present proposal focuses solely on this phase.

  • Phase 3: Next, we will proceed to design solutions for assuring a good quality set of operators and economic security of the protocol. We will also describe the resources required to implement the solutions proposed in Phases 1, 2, and 3.

  • Phase 4: This phase is mainly concerned with implementing the solutions designed during Phases 1, 2, and 3. We will also research some extra topics and problems, as done in the previous phases, and afterward, we will implement them. Further information on this phase will be provided later, by the end of Phase 3.

Project Objective

Phase 2. Sybil and white-labeling resistance mechanism design

In this part of the project, we will focus on one of the crucial aspects of the security of a permissionless staking protocol, namely ensuring that:

  1. Operators are separate entities. That is, there are very few parties that control multiple operators secretly, and no party controls the majority of the operators.

  2. Operators perform their duties independently and don’t use third parties, so-called white-label operators, to do them on their behalf.

In both cases, an entity that runs multiple operators (whether by controlling Sybils or by acting as a white-label operator) could gain too much control over the stake, and over the protocol in general. This would worsen the overall health of the protocol, weaken its resistance against correlated slashing, and could introduce a single point of failure.

We emphasize that even if we ensure that all onboarding operators are honest, we still need to have a system that detects dishonest parties within the set of already onboarded operators. This is because Sybils/white-label operators can be created among the onboarded operators, even if these operators honestly entered the system. The latter may happen, e.g., when one entity that runs operators buys another that also runs operators. In that case, the buying party may end up controlling too much of the stake.

Our work plan for developing a solution for Sybil and white-labeling resistance will begin by researching several techniques that, we believe, have the potential to lead to a solution, either in isolation or as part of an amalgam. The different types of methods we will explore are listed below. In particular, we will investigate credential and arbiter protocols. The former could help fight Sybils. The latter could be used both to fight Sybils and to detect white-label operators. Namely, a party that suspects that some operator is a white label could raise the issue and open a corresponding prediction market where people can bet on whether they believe the claim. The conflict would then be resolved by an assigned resolution mechanism.

The final goal of our work will be to produce a report explaining exactly how the Lido network can use such methods.
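
To illustrate the accusation flow described above, the following is a deliberately simplified model of such a prediction market. The bonding and payout rules are hypothetical placeholders; designing the actual resolution mechanism is the subject of Task 4 below.

```python
from collections import defaultdict

class AccusationMarket:
    """Toy market on the claim 'operator X is a white label'."""

    def __init__(self, operator_id: str, challenger_bond: float):
        self.operator_id = operator_id
        self.bets = {"YES": defaultdict(float), "NO": defaultdict(float)}
        self.bets["YES"]["challenger"] = challenger_bond  # accuser's bond

    def bet(self, who: str, side: str, amount: float) -> None:
        self.bets[side][who] += amount

    def resolve(self, outcome: str) -> dict:
        """Settle the market: winners split the losers' pool pro rata."""
        losing = "NO" if outcome == "YES" else "YES"
        win_pool = sum(self.bets[outcome].values())
        lose_pool = sum(self.bets[losing].values())
        return {who: stake + stake * lose_pool / win_pool
                for who, stake in self.bets[outcome].items()}

market = AccusationMarket("operator-42", challenger_bond=10.0)
market.bet("alice", "YES", 5.0)
market.bet("bob", "NO", 30.0)
# The assigned resolution mechanism rules the claim true:
print(market.resolve("YES"))  # {'challenger': 30.0, 'alice': 15.0}
```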

TASK 1 Sybil and white-labeling resistance. SoK. We will begin by supplementing the SoK from Phase 1 with an SoK for Sybil and white-labeling resistance. We will look for such mechanisms used not only in Web3 but also in Web2 and, if necessary, in real life.

This part will take 4 weeks.

TASK 2 Limiting Sybils by using credentials. Operators who wish to control a lot of stake may be willing to use their credentials to demonstrate that they are not Sybils and to increase their score by putting their external reputation at stake. To preserve the operators’ privacy and limit the system’s reliance on externally issued data, we will propose a mechanism that does not learn or store the credentials but only uses them to ensure that the entity presenting them is not trying to cheat the system. We will use zero-knowledge proofs to protect the users’ privacy. In this part of the project, we will rely on the SoK on decentralized identities and verifiable credentials we delivered in Phase 1. We will also discuss how to make it harder for dishonest operators to use credentials unrelated to them (e.g., bought on a black market).

We will propose mechanisms (one per credential type) that may be used to incorporate

  • real-life credentials

  • Web2 identities

  • Web3 identities

into the Lido protocol. To preserve operators’ privacy, we will ensure that only minimal information about them is revealed.
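
As a rough illustration of the one-credential-one-operator idea, consider the following deliberately naive sketch: each credential deterministically yields a nullifier, and the registry rejects a nullifier it has already seen, so the same credential cannot back two operators. In this naive version the registry still sees the secret; in the envisioned design, a zero-knowledge proof would show that a valid credential hashes to the submitted nullifier without revealing the credential. All names below are hypothetical.

```python
import hashlib

DOMAIN = b"operator-onboarding-v1"  # hypothetical domain separator

def nullifier(credential_secret: bytes) -> str:
    """Deterministic, one-way tag derived from a credential."""
    return hashlib.sha256(DOMAIN + credential_secret).hexdigest()

class CredentialRegistry:
    """Stores only nullifiers, so it learns nothing beyond reuse."""

    def __init__(self):
        self.seen = set()  # set of nullifier hex strings

    def register(self, credential_secret: bytes) -> bool:
        tag = nullifier(credential_secret)
        if tag in self.seen:
            return False  # this credential already backs another operator
        self.seen.add(tag)
        return True

registry = CredentialRegistry()
print(registry.register(b"diploma-1234"))  # True: first use accepted
print(registry.register(b"diploma-1234"))  # False: Sybil attempt rejected
```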

This part will take 7 weeks.

TASK 3 Arbiter protocols. SoK. We expect some core components to be decentralized oracles, token-curated registries, and prediction markets. We will use these “arbiter protocols,” as we call them, to assess whether

  • a prospective operator will provide good quality services and contribute to the quality of the operators’ set or

  • an already onboarded operator is a Sybil or uses a white-label operator.

As a first step, we will prepare an SoK for the following topics:

  • decentralized oracles

  • token-curated registries

  • prediction markets

This part will take 5 weeks.

TASK 4 Arbiter protocols. Resolution mechanisms. In this part, we will design a mechanism that resolves disputes in arbiter protocols. We will propose concrete setups for decentralized oracles and prediction markets. We will specify which parties make up the oracle and who resolves disputes in prediction markets. We will analyze the feasibility of using already onboarded operators as part of an oracle system. Similarly, we will explore the idea of using LDO holders as a resolution mechanism for prediction markets and oracles. To protect operators from unjust accusations, we will design a mechanism that allows such operators to raise a flag and notify the DAO of the resolution mechanism’s wrongdoing.

This part will take 8 weeks.

TASK 5 Incentivizing entities to be transparent about the nodes they are operating

In this task, we will design an economic mechanism that incentivizes entities not to hide information about the nodes they are running. While this will not deter highly motivated and malicious operators, it may be enough to prevent many semi-honest or rational entities from creating Sybil accounts. We will analyze the feasibility of designing a robust mechanism. If the feasibility studies conclude positively, we will propose an incentive mechanism that could make the system more robust against Sybils. We emphasize that we will require the mechanism not to harm operators’ privacy.

We will work on this mechanism for 4 weeks.

This phase will be completed within 28 weeks from the date of the agreement.

Organization, Funding, and Budget

Nethermind will create a dedicated team to run this project.

The project will be funded by Lido DAO. The DAO will pay Nethermind 700 000 DAI, of which 50% (350 000 DAI) will be paid upfront and the remaining 50% (350 000 DAI) on delivery.

At the end of the project, the LEGO council will decide whether the provided systematization of knowledge meets the agreed requirements and, if that is the case, proceed with the payment. In case of disagreement between LEGO and Nethermind on the quality of the deliverables, Lido operators will be used as a resolution mechanism.

The payment will be made to address eth:0x237DeE529A47750bEcdFa8A59a1D766e3e7B5F91

Next steps

We want to put this proposal to a vote in 14 days. The voting will remain open for 7 days.

6 Likes

So this is only research and no implementation?

In that case we would be paying for a research paper and that’s it (stating the obvious). However, I’m stating that because implementation would not be simple and would take a ton more resources.

In this kind of market, why do you think it is beneficial for Lido to give out a large chunk of its treasury and runway to fund public research?

P.S. Is the 700k for this phase 2 only, or for the rest of the phases as well?

3 Likes

Hi Marin!

Yes, this phase is research only, and 700k is to fund this phase. Implementation has been tentatively scheduled for later phases.

I agree that implementation will require a lot of resources. But so does the research, which, to be done correctly, requires time.

In this proposal, we offer much more than a research paper. We will design a mechanism limiting the possibilities of creating Sybils and using white-label operators in Lido. In the long term, this will be absolutely essential for the protocol’s security and to ensure the quality of the set of validators.

Unfortunately, the topics of Sybil resistance and white-labeling are not well researched in the context of Web3. We need to go through existing solutions and proposals to see which ideas (if any) can be repurposed for Lido, and develop new ideas. This takes time. We would be happy to begin implementation at once if it were possible, but this is essentially an open problem in the Web3 field right now, and there is not enough understanding of what needs to be built.

Our current mindset is to base the solution on what we call arbiter protocols (oracles, token-curated registries, prediction markets). These protocols need to be properly incentivized to give quality answers. However, incentivizing arbiter protocols is difficult and takes a lot of work. First of all, a lot is at stake. Secondly, the solutions we have already seen, for example, often reduce to a Keynesian beauty contest, which limits their usability.

Regarding the research being public: we will disclose our findings publicly. This will allow everybody to review and check their quality. If we were to keep the research private (and share it only with, say, the LEGO council), we would violate the transparency needed for the DAO to examine the results of the work it funded.

4 Likes

Hi Michal

Thank you for this post, will provide some thoughts below.

  • This proposal is a request to fund Nethermind’s cryptographic research for the next 28 weeks, for an investment of 700,000 DAI (50% upfront, 50% later)
  • This research is a continuation of Phase I of the research proposal, with results here and here
  • The overall project is scheduled to cover four phases of research:
    • I: Survey the literature relating to identity and attestation schemes
    • → II: Survey the literature relating to oracles, token-curated registries, prediction markets, and Sybil- and white-labeling-protection mechanisms (we are here)
    • III: Design solutions for assuring a good quality set of operators and economic security of the protocol
    • IV: Implement the proposed solution presented in Phase III
  • The overarching goal is to assist Lido in creating and maintaining a permissionless and high-quality validator set mechanism
  • Phase I was executed at an effective rate of 25k DAI a week, over 6 weeks
    • Lido has invested 150k into this project thus far, but token holders should not weigh sunk costs in their decision-making
  • Phase II continues to use the same effective rate of 25k DAI a week, over 28 weeks.

I have no doubt that researching, developing and implementing a solution to maintain a permissionless and high-quality validator set mechanism is a costly and time-consuming exercise. The level of technical expertise required is probably very high and therefore very scarce and therefore also costly.

We have no comments as to the technical merits of this proposal or its results thus far, which appear remarkably successful and well received. However, what is perhaps missing from this proposal is a sense of what the whole project might entail economically for the DAO, end-to-end.

Phase II clearly establishes that research alone will continue to cost the DAO 25k a week. However, we do not understand how much more research is needed in future phases, nor what additional costs might have to be incurred to implement a technical solution.

Overall, we definitely appreciate sequencing a complex project such as this one as, in the long run, it could help the DAO manage the risk of investing in a complex enterprise such as deploying a permissionless and high-quality validator set mechanism. However, for this to be true, the DAO would need at least an 80%-confidence estimate of the range of possible costs and time-spans for the total project.

Could we kindly suggest that the Nethermind team give us a lifetime estimate for the total cost of the project, from the current proposal through to implementation? I understand that some Phases are path-dependent on prior phases. However, we don’t believe it is the right approach to make these requests piecemeal, as the DAO will slowly digest what could well end up being a multi-million DAI research and development expense over the next few months.

                          Cost (DAI)   Time       Context              Stage
Phase I                   150,000      6 weeks    Research             Completed
Phase II                  700,000      28 weeks   Research             Proposed
Phase III                                         Development          TBD
Phase IV                                          Implementation       TBD
Implementation Expenses                           Implementation       TBD
Phase V?
Total Capex
Ongoing (if any)                                  Annual Maintenance

Without this information, I cannot imagine how token holders could make an informed decision regarding continued support for such a project as it winds along through various additional phases and implementation.

Thank you for your help in bringing this data together, please let us know if we have missed this information somewhere

10 Likes

Thanks for the post @steakhouse, and sorry for the late reply.

The initial idea for Phase 3 is to define what a “good quality set of operators” means. We plan to design a scoring system to quantify the set’s quality. For example, we could assign the score based on:

  • performance — that could come from an oracle, like Rated,
  • geolocation — which could be self-reported or obtained from IP
  • client the operator is using — also could be self-reported
  • destructive MEV — the operator could pledge to take only certain types of MEV
  • and so on

To succeed in that phase, we will need to design mechanisms that take the abovementioned data and put them on chain. As some data is self-reported, we need a tool to resolve conflicts that could occur if someone accuses an operator of reporting incorrect data.

We will also need to design a mechanism to reward operators with good scores and penalize those whose scores drop. (There are many open questions here, e.g., if we rely on the amount of stake an operator can control, what happens if the operator’s score significantly drops?)
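
As a hypothetical illustration (ours, not a committed design), such per-attribute signals could be normalized and combined with DAO-tunable weights into a single protocol score. All attribute names, weights, and values below are made up.

```python
WEIGHTS = {
    "performance": 0.40,       # e.g., from an oracle such as Rated
    "geo_diversity": 0.20,     # self-reported or derived from IP
    "client_diversity": 0.20,  # self-reported client choice
    "mev_policy": 0.20,        # pledge to avoid destructive MEV
}

def protocol_score(attributes: dict) -> float:
    """Weighted average of per-attribute scores, each clamped to [0, 1]."""
    return sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in attributes.items())

operator = {
    "performance": 0.95,
    "geo_diversity": 0.60,     # runs in a crowded jurisdiction
    "client_diversity": 1.00,  # runs a minority client
    "mev_policy": 1.00,
}
print(protocol_score(operator))  # ~0.90 (0.38 + 0.12 + 0.20 + 0.20)
```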

We hope to re-utilize some building blocks developed in Phase 2. However, we may need to analyze the economic security of the building blocks for each use case separately. Introducing a scoring system, requiring operators to do additional reporting, etc., may change the protocol’s incentive structure, which will also need to be analyzed.

Phase 4 focuses on implementing mechanisms designed in Phases 2 and 3: arbiter mechanisms (prediction markets, oracles) and scoring systems. We will also need to implement functionalities that provide data to compute the score, resolve prediction markets, etc. Furthermore, we will also need to ensure that all building blocks work together as designed and make a secure and robust protocol.

While we have preliminary scopes for Phases 3 and 4, we only have a rough estimate for the former. Namely, Phase 3 should take at most 24 weeks and cost no more than 600 000 DAI. If the budget for that stage needs to change substantially, we will provide detailed information to justify the change.

Unfortunately, estimating the budget for Phase 4 is very difficult right now. This is because we don’t know yet which arbiter mechanism will need to be implemented; that will be identified as part of Phase 2. Will it be a prediction market, an oracle, or both? We also don’t know yet which data we will need. This, too, will be determined by Phases 2 and 3. We may need data that comes from various sources, and arguably, the tool used to obtain the data may need to be adjusted for every source separately. Eventually, we will need to integrate all the developed building blocks with the existing protocol. This could take a substantial amount of time and also depends on the choice of building blocks. Finally, if everything goes according to plan, we should debate Phase 4 at the beginning of 2024. Since the ecosystem is moving at a fast pace, it is difficult to tell now what implementations will be available at that moment.

5 Likes

Thanks for having us at the community call! Great Q&A session. It’s great to see such engagement from the community.
For those who missed our presentation of the Phase 1 results, here is the link:
Node Operator Community Call #3 - YouTube

6 Likes

Thanks for the detailed updates.

This looks like something that should be funded by multiple projects rather than Lido being the sole carrier of such research.

What if we get the StarkWare situation, where mid-development the client was stopped and went back to R&D? (I’m asking as you’re the execution side of the R&D.)

There are so many unknowns in this, and I’ll quote you: “Since the ecosystem is moving at a fast pace, it is difficult to tell now what implementations will be available at that moment.”

It’s what worries us the most. The treasury burn is substantial for so many maybes. This amount of capital can support multiple Lido expansions and provide decentralization of new networks.

To conclude, the risk is higher than the reward, as it can affect multiple streams severely with no clear outcomes. We need to be responsible with protocol funds at all times, and in a bear market especially.

6 Likes

Agreed. I’m very sympathetic to how large of a project this is. But the solution must be to break this up into multiple smaller projects, or to look for a solution that gets us closer to our goal but can be researched and implemented more immediately.

I’m also concerned about the time length of the project. Withdrawals are upcoming shortly, and we should not expect that competitors will be stagnant. Lido is way ahead of other protocols in terms of the number of validators we are running, but capital can easily exit and flow elsewhere. We need solutions more immediately than this would provide. Had I been active in the community prior to the first Nethermind proposal on this project, I would have been against it for the same reasons.

While it’s true that we are limited in scope until withdrawals are possible using only withdrawal credentials, there are still potential solutions that get us much closer to our goal and could be considered. There are tradeoffs there, of course, but it makes more sense to me to first rule out simpler, more manageable solutions that, while not perfect, could keep us moving forward rather than being frozen for the next year.

Let’s not put all of our eggs into one basket, even if this approach might very well be the best long-term solution. But it may not be, and it doesn’t seem prudent to put more money and time solely into this effort when the outcome is so opaque.

2 Likes

Also an important thing for this: who are the actual researchers?

We’d like to see their backgrounds and experience, with a list of previous research publications done by them (not by Nethermind in general). Especially if something great was built on their research, that would help the community here to understand the value for the money.

To remain successful, Lido should become more permissionless and invite more operators. However, that opening comes with risks, Sybils and white-labeling being among the most prominent. We took a look at Lido’s self-assessed scorecard (https://lido.fi/scorecard) and noted the following:

  • There’s a way for new operators to enter the set and prove themselves. Current score: Needs improvement. Lido is actively researching ways to allow permissionless operators to join its validator set, including working with SSV Network and Obol on DVT, as well as exploring ways for solo stakers to participate in the protocol.

However, onboarding operators permissionlessly is tricky. Or, more precisely, it is tricky to permissionlessly onboard operators and at the same time:

  • Have good scores in the following (same scorecard):
    • Operators run their own nodes (no white-labeling) (currently: Good)
    • No operators with over 1% of total stake of Ethereum through Lido (currently: Okay)
    • Distributed geographically and jurisdictionally (currently: Okay)
    • Distributed variation of on-premise infra and different cloud providers (currently: Okay)
    • Client Diversity (currently: Okay)
  • Make sure that the operators together form a good-quality set of operators (cf. https://hackmd.io/K6udDz1nSZOoX8t-vE98qg )
  • Fulfill “Increase Ethereum’s technical, geographical, and jurisdictional resilience” (cf. https://research.lido.fi/t/lido-on-ethereum-community-validation-manifesto/3331)

Given all the above, I’m afraid I have to disagree that the risks are higher than the rewards.

Regarding the StarkWare situation: we (as a company; it was a different team) identified the issue and solved it. The R&D in that project was really “D”, and it went back to “D”. Our team has shown the ability to perform proper research, e.g., in https://research.lido.fi/t/a-proposal-for-partnering-with-nethermind-to-design-a-mechanism-for-a-good-validator-set-maintenance/3000, not to mention other projects run for clients and organizations like the EF. (I will introduce the team shortly in a separate entry.)

1 Like

I totally understand the urge to break the project into smaller chunks, where a piece is researched and then implemented. However, this approach also has its own drawbacks. The first is that we need to clarify what pieces we need and how they have to play with each other in a bigger system. It is more cost-efficient to put the implementation at the end of the project, when we know the needed blocks.
I totally agree on: “Withdrawals are upcoming shortly, and we should not expect that competitors will be stagnant. Lido is way ahead of other protocols in terms of the number of validators we are running, but capital can easily exit and flow elsewhere.” And this project aims to increase Lido’s competitiveness in the liquid staking market.
Regarding “there are still potential solutions that get us much closer to our goal and could be considered. There are tradeoffs there, of course, but it makes more sense to me to first rule out simpler, more manageable solutions that, while not perfect, could keep us moving forward rather than being frozen for the next year.”: we are always happy to discuss alternative solutions!

Thanks for the answer and updates.
Congrats on freshly solving the StarkWare situation; I saw some tweets yesterday. I never thought it was the same team. I was curious: what if such a delay happens to us, or the research turns out not to be useful?

Anyway, we completely understand the need for a permissionless set and why it should happen as soon as humanly possible.

Looking forward to the team introduction and to the work done as it should help us understand the gigantic ask.

I propose Lido form a committee of experts on this topic, get 2 additional quotes for the same work, and have the community choose who should do the research, rather than picking up the first offer, as the research funding is substantial.

With the quotes and the teams’ research backgrounds, Lido voters should be able to decide the right direction here and the best value for the money, based on skill, history of deliverables, cost, or any other factor that the expert committee would put in place.

In general I agree with the concerns that a) this is a lot of funding, and the success of delivery is very difficult to ascertain without also knowing what comes after, and b) it’s a big effort that can probably be broken down into smaller pieces. It’s probably a good idea to see if some work can be done to either pare down the scope or identify areas of focus where more time should be spent than others, in an attempt to reduce costs and possibly even time spent.

However I feel like we’re missing the forest for the trees with some of these lines of questioning (and while the amount of funding is large, it pales in comparison to things like the funding spent on liquidity incentives, and the time and resources spent on other initiatives, for arguably much less value). The question here should be pretty simple: how much does the DAO want to spend on strategic research that would allow the protocol to grow ~2-2.5x and operate in a sustainable and safe manner? Even if the research doesn’t yield the answer, it would likely still yield valuable answers (e.g. maybe it’s not doable, maybe it’s doable but the concessions are not acceptable, maybe it’s doable but needs X, Y, Z).

This is obviously desirable, but it’s not really realistic. There are not a lot of other staking solutions at the scale of Lido, and those which are, are centralized and don’t really care about these things. If the DAO doesn’t invest in doing leading research and strategic initiatives with deep horizons, then it’ll either meet a limit or be surpassed by competitors that do not have to operate in a trustless and permissionless manner.

There are other teams that may be interested, or might be interested soon (e.g. there is a lot of overlap between some of this work and work that teams developing robust DVT infra are thinking about) so there’s opportunities for cost sharing and synergies, and those should be duly explored as they come up.

Identifying a team that had the requisite knowledge and capacity to do this work to begin with took weeks, and working with the team to make sure that the research plan was appropriately thought out took even longer. I fear a pause to look for co-funding (especially in the middle of a bear) will probably take just as long if not longer, and that’s not really the kind of time that can be spared given how fast this industry and market move.

Expansions are an odd comparison to make here because they’ve largely all been money pits with little value returned to the protocol. Part of the problem in doing this kind of comparative analysis is that there’s only been one bull → bear cycle, so it’s not possible to fully gauge whether these expansion initiatives have been successful or not, and that problem of timeframe applies here too. There’s a large upfront cost with many unknowns, and success can only be assessed at a much later time.

I think this calculus is wrong. The risk of doing nothing, or waiting longer to do something, is greater than the risk of doing something costly but potentially unfruitful. This research is basically the bedrock that will determine whether Lido can appropriately scale to the potential that a lot of us are envisioning, in a safe and secure manner.

I don’t agree with the dichotomy here. If there are simpler solutions that can help us bridge the gap then that’s great and they should be analyzed and considered, but they can be done at the same time as longer-range research work. From the solutions that I’ve discussed with others, all make tradeoffs that may work at small scale or for the short term (next 6-12 months), but are not robust enough to allow for the evolution of the protocol that this research is meant to enable.

IMO, if we take that view and then end up doing nothing, the protocol has a shelf-life of 1-2 more years tops, and then it either gets vampired by a new competitor or just steadily loses share and becomes a middle-of-the-pack solution vs. a leader.

6 Likes

Historically speaking, yes, there were large spends on multiple fronts that are less important than this topic. However, one wrong does not justify rushing into another.
(I’m not saying this research is wrong for Lido; I’m referencing the approach to it.)

With all due respect, if this deliverable as a complete project can’t be done before 2024 (ETA unknown), what are a few more weeks added on top of it?

I would not use “expansions” as a comparison for why we should not do this, but as one of many things that can be funded. The old ones were done without a full BD or Finance team. There are models that do not create sinks, but that topic was not the target of this reference. It really did not serve as an importance meter either.

Each time we remove a big chunk of DAI from the treasury, we are pushed toward various outcomes that we can’t predict or control, e.g., another round of funding for runway, or selling other tokens from the treasury for solvency.
@steakhouse brought a lot of education about the importance of each individual asset to the DAO, which impacted multiple streams in a positive manner. We should use that knowledge wisely.

Decisions made under heavy time pressure are usually wrong.

Let’s agree to disagree on this, as we’re looking at it from opposite directions. The BD/Finance take weighs different factors than the NOM team’s.

If we’re talking realistically, for 1.45m DAI at a pre-development stage Lido could build an in-house team of researchers. So, without more data shared, it still looks like this only benefits Nethermind.

P.S. I don’t see how this line of questioning, which should provide answers to “who exactly will conduct this, and can the price be lower?”, is something bad. It’s our future in question, and it’s our duty as DAO contributors to provide facts so that the wider community has transparent insight.

In the end, LDO holders are the ones who will decide based on those facts and throw weight where they see it fit.

My point here is mainly that if the finance team is uncomfortable with the spend, based not just on the opaqueness but on not knowing precisely what we will get for that investment, then we need to ensure that other solutions are funded, even if they are more short-term.

If we only swing for the fences, we have a reasonable likelihood of ending up in a position where a lot of time and money has been spent, to move very little, while the competition does.

My concern is exactly the same as yours @Izzy, that if we don’t continue to move, the protocol “has a shelf-life of 1-2 more years tops” and so moving forwards is critical. A swing for the fences approach should not be the only thing we’re relying on, which I don’t think it is, but that may not be clear to everyone reading this proposal.

Big projects like this have a high likelihood of failure, but a huge payoff for success. Whereas smaller projects allow us to fail faster, iterate, and continue to take smaller steps towards our end goals.

To me, the most important thing is that 6-12 months from now, we are much further forward than we are today. I don’t think this proposal alone gets us there. So, we need to be taking, and if necessary funding, other approaches which get us movement in a shorter time frame to stay ahead of the competition and ensure that we actually have the funds necessary to take these bigger, swing for the fences approaches, which really are critical.

At the same time, it’s not possible for us to get to where we need to be long-term using only incremental steps. At some point, you must fund bigger, riskier, research if you want to overcome large problems and move the protocol further ahead of the competition, which is what this proposal is trying to achieve.

Both approaches to research investment can work, and must work together in harmony, and it’s important to decide when to invest in one, the other, or both simultaneously.

1 Like

I appreciate what the Nethermind team has done and proposes.

I think an automated scoring system should have a higher priority in the short/mid term. Although the system is expected to re-utilize some blocks developed in Phase 2, we could still improve the validator set quality by shipping an early version that includes the data mentioned above and then iterating on it with more outcomes from Phase 2.
So, could we tackle some tasks from Phase 3 before Phase 2?

DVT solutions should be of interest as well.

1 Like

I think historic examples such as constant small improvements, additions of new NOs, DVT integration pilots on testnet, this research itself (i.e., how early it was thought about and started), the staking router design, and the dual governance design and discussions all point to the DAO and contributors thinking about various paths to both steady improvements and, ultimately, full protocol maturation.

This is the great thing about Staking Router – any interested party will be able to come up with modules to implement their vision of attaching new sources of validators to the Lido protocol that the DAO can then assess and potentially integrate.

The one thing I would add as a word of caution here is against getting too lax and doing small things in the interim just to have something to show, as things that may go wrong with low financial impact due to the size of the protocol (e.g., penalties or even slashing) can ultimately have a very large and potentially devastating reputational impact.

3 Likes

Hey @mpzajac
Thank you for the detailed proposal.
It piqued my interest personally, in the sense that I am looking into the possibility of writing an SoK-type paper re: approaches to oracles, token-curated registries, prediction markets, and Sybil- and white-labeling-protection mechanisms. I wanted to first cross-check with you whether it is okay for me to do so. Is this okay?

Team members (in alphabetical order):

Ahmet Ramazan Agirtas: Ph.D. candidate in Cryptography. Studying cryptographic protocols since 2018. The main area of specialization is digital signature schemes and their applications. A consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind. The CV can be found here.

Aikaterini-Panagiota Stouka: Ph.D. in Computer Science at the University of Edinburgh. Thesis title: “Incentives in Blockchain Protocols”. Bachelor in Mathematics and MSc. in Computer Science. Over 8 years of experience in blockchain research. Experience as a Research Associate at the Blockchain Technology Laboratory at the University of Edinburgh and from collaborating with the company Input Output Global on designing the reward mechanism that was implemented in the “Shelley” update on the Cardano blockchain platform. List of publications: https://dblp.org/pid/184/9142.html.

Albert Garreta: Ph.D. in mathematics and computer science. Over 6 years of experience in academic research on algorithmic problems in algebra. Gold medalist in Kaggle’s machine learning competition PLAsTiCC (9th solution out of more than 1000). 2nd-place solution at StarkNet’s Amsterdam hackathon (2022). Contributor to the creation and improvement of Cairo’s math libraries (see here and here). The CV can be found here.

Ignacio Manzur: MAst in Pure Mathematics at the University of Cambridge, with a focus on Geometry and Number Theory. Currently working in fundamental cryptography research, with an emphasis on zero-knowledge proofs and their security. Also, a consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind.

Isaac Villalobos Gutierrez: Experienced in C++ and Rust development. Contributed to the development of Vampire, a novel zkSNARK, among other projects. Currently pursuing a Master’s degree in Electrical Engineering and Computer Science in collaboration with the Autonomous Robots and Cognitive Systems Laboratory. Additionally, a consultant on technical due diligence requests for Ethereum ecosystem projects at Nethermind, delivering these services to external clients of the company.

Jorge Arce Garro: Ph.D. candidate in Applied Mathematics at the University of Michigan, with research experience in quantum computing, control theory, and machine learning. Also, a consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind.

Michał Zając (team lead): Ph.D. in Computer Science. Over 10 years in cryptography research, both in academia and industry. Doing blockchain-related research since 2018, with a focus on user privacy. Main areas of specialization: zero-knowledge proofs, their applications, and security. Co-author of Vampire, an updatable and universal zkSNARK with the shortest proof. List of publications: https://dblp.org/pid/02/6977.html, CV: https://github.com/mpzajac/about_me/blob/main/cv_2023.pdf

Yevgeny Zaytman: Ph.D. in Mathematics at Harvard, with a focus on Number Theory and Algebraic Geometry. Over 10 years working at the Center for Communications Research. Contributor to the improvement of Cairo’s math libraries (see here).

4 Likes

Hi @spilehchiha. Sure! Feel free to reach out to us to discuss the papers and approaches.

1 Like