A proposal for partnering with Nethermind to design a mechanism for good validator set maintenance. Phase 2

Appreciation for what the Nethermind team has done and proposes.

I think an automated scoring system should have a higher priority in the short/mid term. Although the system is expected to reuse some building blocks developed in Phase 2, we could still improve the validator set quality by shipping an early version that includes the data mentioned above and then iterating on it as more outcomes arrive from Phase 2.
So, could we tackle some tasks from Phase 3 before Phase 2?

DVT solutions should be of interest as well.


I think historic examples (constant small improvements, additions of new NOs, DVT integration pilots on testnet, this research itself, i.e., how early it was conceived and started, the Staking Router design, and the dual-governance design and discussions) all point to the DAO and contributors thinking about various paths to both steady improvements and, ultimately, full protocol maturation.

This is the great thing about Staking Router – any interested party will be able to come up with modules to implement their vision of attaching new sources of validators to the Lido protocol that the DAO can then assess and potentially integrate.

The one word of caution I would add is against getting too lax and doing small things in the interim just to have something to show: things that go wrong with low financial impact due to the size of the protocol (e.g., penalties or even slashing) can still have a very large and potentially devastating reputational impact.


Hey @mpzajac
Thank you for the detailed proposal.
It piqued my interest personally, in the sense that I am looking into the possibility of writing an SoK-type paper on approaches to oracles, token-curated assets, prediction markets, and Sybil- and white-labeling-protection mechanisms. I wanted to first cross-check with you whether it is okay for me to do so. Is this okay?

Team members (in alphabetical order):

Ahmet Ramazan Agirtas: Ph.D. candidate in Cryptography. Studying cryptographic protocols since 2018. The main area of specialization is digital signature schemes and their applications. A consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind. The CV can be found here.

Aikaterini-Panagiota Stouka: Ph.D. in Computer Science from the University of Edinburgh. Thesis title: “Incentives in Blockchain Protocols”. Bachelor’s in Mathematics and M.Sc. in Computer Science. Over 8 years of experience in blockchain research, including work as a Research Associate at the Blockchain Technology Laboratory at the University of Edinburgh and collaboration with Input Output Global on designing the reward mechanism implemented in the “Shelley” update of the Cardano blockchain platform. List of publications: https://dblp.org/pid/184/9142.html.

Albert Garreta. Ph.D. in mathematics and computer science. Over 6 years of experience in academic research on algorithmic problems in algebra. Gold medalist of Kaggle’s machine learning competition PLAsTiCC (9th solution out of more than 1000). 2nd place solution at StarkNet’s Amsterdam hackathon (2022). Contributor to the creation and improvement of Cairo’s math libraries (see here and here). The CV can be found here.

Ignacio Manzur: MAst in Pure Mathematics at the University of Cambridge, with a focus on Geometry and Number Theory. Currently working in fundamental cryptography research, with an emphasis on zero-knowledge proofs and their security. Also, a consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind.

Isaac Villalobos Gutierrez: Experienced in C++ and Rust development. Contributed to the development of Vampire, a novel zkSNARK, among other projects. Currently pursuing a Master’s degree in Electrical Engineering and Computer Science in collaboration with the Autonomous Robots and Cognitive Systems Laboratory. Additionally, a consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind.

Jorge Arce Garro: Ph.D. candidate in Applied Mathematics at the University of Michigan, with research experience in quantum computing, control theory, and machine learning. Also, a consultant on Technical Due Diligence requests for projects in the Ethereum ecosystem, delivered to external clients of Nethermind.

Michał Zając (team lead). Ph.D. in Computer Science. Over 10 years in cryptography research — both in academia and industry. Doing blockchain-related research since 2018 with focus on user privacy. Main area of specialization: zero-knowledge proofs, their applications, and security. Co-author of Vampire — an updatable and universal zkSNARK with the shortest proof. List of publications: https://dblp.org/pid/02/6977.html, CV: https://github.com/mpzajac/about_me/blob/main/cv_2023.pdf

Yevgeny Zaytman: Ph.D. in Mathematics at Harvard, with a focus on Number Theory and Algebraic Geometry. Over 10 years working at the Center for Communications Research. Contributor to the improvement of Cairo’s math libraries (see here).


Hi @spilehchiha. Sure! Feel free to reach out to us to discuss the papers and approaches.


Greatly appreciate all the discussion and the Nethermind team’s help in adding more context to support this proposal.

Specifically, with regard to permissionless NO onboarding, there are multiple competitors in the pipeline coming online in the coming months with various approaches to validator onboarding (e.g., Stader with an SR-like model, Swell with a permissioned → permissionless roadmap, and others). Lido needs to develop a comprehensive response to the problem of permissionless onboarding, and especially to maintaining a robust validator set, without waiting for the field of new competitors to settle on a robust solution.

If Lido wants to make meaningful and measurable improvements in executing on a roadmap for decentralization, then permissionless onboarding and reliable metrics for evaluating operator and validator sets are clearly crucial elements to get right.

Generally agree with the perspective that multiple approaches to building technical know-how are desirable, but would also caution against dismissing one route to building this expertise if no others are available. There’s an element of risk sequencing that the team has made an effort to put together for the DAO with multiple points of exit for token holders, if they believe the risk is too great to continue with the research.

Given the importance of this technical research for the protocol, it seems like a worthwhile investment, particularly with a clearer line of sight to Phase III and beyond.


Hey @mpzajac, thank you so much for this; I am very excited to be working on it!
My collaborators are Jeremy Clark and Shayan Eskandari.
Is there a more dedicated channel for us to discuss possible collaboration with you and papers / approaches?


Hello DAO!

Thanks for all your feedback; it gave us a better understanding of the DAO’s needs and requirements. Please see the updated proposal below.

A proposal for partnering with Nethermind to design a mechanism for a good validator set maintenance. Phase II.


This is a proposal to fund Nethermind to design a white-labeling resistance mechanism. The delivery will include a detailed discussion of the final design and a systematization of knowledge for white-labeling prevention, oracle systems, prediction markets, and token-curated assets. During the project, the team will investigate the state of the art, which solutions are or will be used in practice, and how. We will then propose concrete mechanisms to make Lido’s network white-labeling resistant. The project is one of the steps toward enabling Lido to onboard new operators in a permissionless manner, and is a continuation of our previous proposal. It will take 22 weeks, and its cost of 450 000 DAI will be covered by Lido DAO.

Proposer and work mindset


Michał Zając on behalf of Nethermind.

About Nethermind

Nethermind is a team of world-class builders and researchers with expertise across several domains: Ethereum Protocol Engineering, Cryptography Research, Layer-2s, Decentralized Finance (DeFi), Miner Extractable Value (MEV), Smart Contract Development, Security Auditing, and Formal Verification, among others.

Our Research team comprises computer scientists, mathematicians and engineers who work on analyzing, breaking, and designing blockchain and cryptographic schemes. Our expertise and interests span the fields of zero-knowledge proofs, non-deterministic programming, Distributed Validator Technology, and decentralized identity.

Working to solve some of the most challenging problems in the blockchain space, we frequently collaborate with renowned companies and DAOs, such as Ethereum Foundation, StarkWare, Lido Finance, Gnosis Chain, Aave, Flashbots, xDai, Open Zeppelin, Forta Protocol, Energy Web, POA Network and many more. We actively contribute to Ethereum core research and development, EIPs, and network upgrades with the Ethereum Foundation and other client teams.

General work mindset

The following principles will drive the development of the protocols:

  • All the design considerations and risk analysis will be done with the consent of the Lido DAO.
  • All proposed solutions will come with security analysis. When available, the protocols’ security will be proven.
  • Milestones and deliverables will be small to ensure a good overview of the team’s progress.


  • Operator: A party that runs, or participates in running, one or many Ethereum validators. Operators, solely or jointly, have access to the signing keys of one or more validators but do not necessarily have control of the corresponding withdrawal credentials. Operators can control multiple nodes.
  • Node: A virtual sub-party (a piece of hardware and software) controlled by an operator that performs the operator’s jobs w.r.t. a concrete validator. When an operator is a party that may control multiple validators, a node is a representation of a concrete validator.
  • White-label operators: If a party, who was onboarded as an operator, delegates the operation of a node to another party, we call the latter a white-label operator.
  • Sybil operator: We call a party Sybil if it controls two or more operators behind the scenes. A Sybil-protection mechanism is a set of countermeasures that makes it difficult for a party to have two (or more) operators onboarded such that the protocol is unaware they are colluding.
  • Protocol score (or score): protocol’s internal scoring system that measures whether the operator contributes to the good quality of the set of operators.
  • External reputation: operator’s reputation in ecosystems external to Lido, e.g., in real life, in Web2 services, in other Web3 services, etc.
  • Arbiter protocol: we call a protocol “arbiter protocol” if it is either a decentralized oracle, token-curated asset, or prediction market.

Ideal mechanism overview

An ideal mechanism evaluates the Lido DAO’s validator set according to the operator & validator set strategy described in this note by Lido. More precisely, the mechanism must have methods for improving the validator set whenever there is an option to do so. It must have zero input from permissioned roles (i.e., no admins/committees). Furthermore, input from LDO, stETH, and ETH token holders must have low to zero impact.

The mechanism has to be capital efficient: Collateral for operators can be used, but it can’t be the single or primary mechanism; it has to function mainly by staking with other people’s money.

The mechanism has to account for the bull-bear cycle effect in a way that allows operators to stop validating if it becomes too expensive. Additionally, the mechanism has to reduce the number of operators in bear markets and expand it in bull markets.

The mechanism has to prevent the set of operators from becoming worse. This includes, but is not limited to, avoiding the following:

  • reduced performance,
  • reduced neutrality,
  • offline time,
  • slashable offenses,
  • reduced jurisdictional geodiversity,
  • reduced localization diversity of the infrastructure,
  • reduced Ethereum client diversity and other diversity vectors,
  • giving up independence (e.g., in a merger),
  • destructive MEV,
  • delegation of operation (delegating operator duties should reduce the amount of stake that the operator can control, potentially removing it from the operator set altogether).

Improving operational quality should increase an operator’s revenue (by increasing the stake or the commission).

The stake should be distributed flat-ish. Operators should only control up to 1% of total ETH staked through Lido.
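To make the flat-ish requirement concrete, here is a minimal sketch of one way to enforce it. Everything here is our own illustration, not part of the proposal: the function name, the water-filling scheme, and the use of a per-operator score as the allocation weight are all assumptions.

```python
def allocate_stake(scores, total_stake, cap_fraction=0.01):
    """Distribute total_stake proportionally to per-operator scores,
    capping each operator at cap_fraction of the total (water-filling).

    Stake that cannot be placed without breaching the cap is left
    unallocated (in practice it would stay in the buffer or go
    elsewhere)."""
    cap = cap_fraction * total_stake
    alloc = {op: 0.0 for op in scores}
    uncapped = {op for op, s in scores.items() if s > 0}
    remaining = total_stake
    while remaining > 1e-9 and uncapped:
        total_score = sum(scores[op] for op in uncapped)
        pool = remaining  # snapshot, so shares in this round are consistent
        capped_now = set()
        for op in list(uncapped):
            share = pool * scores[op] / total_score
            if alloc[op] + share >= cap:
                # operator hits the cap: fill to cap, revisit the rest next round
                remaining -= cap - alloc[op]
                alloc[op] = cap
                capped_now.add(op)
        if capped_now:
            uncapped -= capped_now
        else:
            # nobody hits the cap: hand out the pool proportionally and stop
            for op in uncapped:
                alloc[op] += pool * scores[op] / total_score
            remaining = 0.0
    return alloc
```

Note that a very high score cannot buy more than 1% of the stake; the surplus flows to the other operators, which is exactly the flattening effect the requirement asks for.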

The mechanism cannot overfit on any particular parameter; most importantly, it cannot overfit on performance: super-performant operators often cut corners or sacrifice specific attributes for others. Furthermore, overfitting on performance and profitability is inherently centralizing due to economies of scale and, in general, cost minimization (i.e., locating in places with the cheapest servers, bandwidth, etc.). That being said, the mechanism has to ensure an overall good level of performance.

The mechanism should allow for a new operator to enter the set of operators with essentially no collateral or reputation and work its way to an optimal position within the network of operators. That should be possible, although it may take a long time, if the operator has a “good enough” performance and is ecosystem aligned, independent, and runs its hardware in non-concentrated geographical/jurisdictional areas. There might be a need for an insurance pool or collateral to enter at zero or to rise to the top, but it could be optional in the middle.

The amount of stake controlled by an operator should depend on a “protocol score”. This score should reflect how much the operator contributes to having a good overall validator set. In particular, an operator joining the protocol should be given a low to neutral score, implying that it can control only a very limited stake. The score, and thus the amount of controlled stake, should increase when the operator contributes some or all of the following. We note that the exact scoring mechanism is yet to be researched, so the list below is provisional.

  1. Providing additional bond.
  2. Providing good quality services. Users can build their reputation (and thus score) by providing good services. Defining what “good services” means will be part of Phase 3.
  3. Providing information about itself, for example, revealing its Web2/Web3/real-life identity or other credentials such as educational institution diplomas, GitHub activity, hackathon awards, etc. Ideally, this information will be provided in a privacy-preserving yet verifiable manner.

An operator that provides its identity has more to lose than just bond when it misbehaves. Its external reputation is at stake. Additionally, an operator that (verifiably) reveals technical knowledge credentials is more likely to operate its validators properly. In some scenarios, this could increase the Sybil resistance of the network (though this increase is certainly limited, and a complete Sybil resistance solution would need to rely on further mechanisms).

Notably, users who want to remain anonymous would still be able to control a substantial stake by providing bond (Point 1) or gaining a reputation by providing good quality services (Point 2).
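Purely as an illustration of how the three provisional components might be combined, consider the sketch below. The weights, the diminishing-returns curve on bond, and the function name are all our assumptions; the actual scoring mechanism (and the definition of "good services") is explicitly left to later phases.

```python
import math

def provisional_score(bond_eth: float,
                      service_quality: float,
                      identity_credibility: float) -> float:
    """Toy combination of the three provisional score components.

    bond_eth: additional bond posted by the operator (component 1);
    service_quality in [0, 1]: track record of good services
        (component 2; its precise definition is deferred to Phase 3);
    identity_credibility in [0, 1]: strength of verifiable identity
        or credential disclosures (component 3).

    The weights and the log curve are illustrative assumptions only.
    """
    # Diminishing returns on capital, so bond alone cannot dominate
    # (the mechanism must stay capital efficient).
    bond_component = math.log1p(bond_eth)
    # Demonstrated service quality weighs most.
    quality_component = 2.0 * service_quality
    identity_component = 1.0 * identity_credibility
    return bond_component + quality_component + identity_component
```

Under such a shape, an anonymous operator with bond and a good track record can still outscore an identified operator with neither, matching the note above about anonymity.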

General objectives

We will assist Lido in creating and maintaining a permissionless and high-quality validator set mechanism. This entails:

  1. Designing and implementing methods to ensure that validators are run by a high-quality set of operators. In particular, each operator performs its duties on its own and does not cede them to an external party (i.e., it does not hire a white-label operator), is not secretly associated with other operators, and ensures that its hardware and software run performantly.
  2. Conducting a thorough economic analysis to understand how the market fluctuations, or changes in the Ethereum protocol itself, can compromise the system’s security.

The project is divided into the following phases:

  • Phase 1: We survey the literature and state-of-the-art approaches to identity and attestation schemes. This phase has been completed already.
  • Phase 2: During this phase, we will survey the literature and state-of-the-art approaches to white-labeling protection. We will focus on arbiter protocols (oracles and prediction markets) to assess their usability for this problem. We will then propose a mechanism that utilizes an arbiter protocol to fight white-labeling, and investigate how to disincentivize the use of white-label operators. The present proposal focuses solely on this phase.
  • Phase 2.5: We will make a proof-of-concept implementation for the white-labeling prevention mechanism designed in Phase 2.
  • Phase 3: During this phase, we will work on Sybil resistance. We will prepare a survey of the state-of-the-art approaches and propose a mechanism that significantly increases the cost of creating Sybils.
  • Phase 3.5: We will make a proof-of-concept implementation for the Sybil prevention mechanism designed in Phase 3.
  • Phase 4: Next, we will proceed to design solutions for assuring a good quality set of operators and economic security of the protocol. We will also describe the resources required to implement the solutions proposed in Phases 1 to 4.
  • Phase 5: This phase is mainly concerned with implementing the solutions designed during the previous phases. We will also research some extra topics and problems, as done in the previous phases, and afterward, we will implement them. Further information on this phase will be provided later, by the end of Phase 3.

Project Objective

Phase 2. White-labeling resistance mechanism design

In this part of the project, we will focus on one of the crucial aspects of the security of a permissionless staking protocol: ensuring that operators perform their duties independently and do not use third parties, so-called white-label operators, to do them on their behalf. An entity that runs multiple operators (whether by controlling Sybils or by acting as a white-label operator) could gain too much control over the stake, and over the protocol in general. This would worsen the overall health of the protocol, weaken its resistance against correlated slashing, and could introduce a single point of failure.

We emphasize that even if we ensure that all onboarding operators are honest, we still need to have a system that detects dishonest parties within the set of already onboarded operators. This is because white-label operators can be created among the onboarded operators, even if these operators honestly entered the system. The latter may happen, e.g., when one entity that runs operators buys another that also runs operators. In that case, the buying party may end up controlling too much of the stake.

Our work plan for developing white-labeling resistance will begin by researching several techniques that, we believe, have the potential to lead to a solution, either in isolation or as part of an amalgam. The different types of methods we will explore are listed below. In particular, we will investigate arbiter protocols. The motivation for using arbiter protocols is as follows: a party that suspects some operator is a white label could raise the issue and open a corresponding prediction market where people bet on whether they believe the claim. The conflict would then be resolved by an assigned resolution mechanism.

The final goal of our work will be to produce a report explaining exactly how the Lido network can use such methods.

TASK 1 White-labeling resistance. SoK.

We will begin by supplementing the SoK from Phase 1 with an SoK for white-labeling resistance. We will look for such mechanisms not only in Web3 but also in Web2 and, if necessary, real life.

This part will take 3 weeks.

TASK 2 Arbiter protocols. SoK.

We expect some core components to be decentralized oracles, token-curated assets, and prediction markets (with a focus on the last). We will use these “arbiter protocols”, as we call them, to assess whether

  • a prospective operator will provide good quality services and contribute to the quality of the operators’ set or
  • an already onboarded operator uses a white-label operator.

As a first step, we will prepare an SoK for the following topics:

  • decentralized oracles
  • prediction markets

This part will take 3 weeks.

TASK 3 Arbiter protocols. Resolution mechanisms.

In this part, we will design a mechanism that resolves disputes in arbiter protocols. We will propose concrete setups for decentralized oracles and prediction markets. We will specify which parties constitute the oracle and who resolves disputes in prediction markets. We will analyze the feasibility of using already onboarded operators as part of an oracle system. Similarly, we will also explore the idea of using LDO holders as a resolution mechanism for prediction markets and oracles. To protect operators from being unjustly accused, we will design a mechanism that allows such operators to raise a flag and notify the DAO of the resolution mechanism’s wrongdoing.

This part will take 8 weeks.

TASK 4 Heuristics against Sybils and white-labels

We will analyze the possibility of using heuristic mechanisms to detect Sybils and white labels by analyzing their behavior and setups. We will review the available literature on the topic and determine which operator behaviors and parameters should be observed. Finally, we will draft a mechanism that, in conjunction with the arbiter protocol, could be used to detect a Sybil or white-labeling operator.
This part will take 4 weeks.

TASK 5 Do you trust your white-label?

We will investigate a mechanism that requires parties using white-label operators to trust them with some funds. More precisely, in this mechanism a party joining Lido will need to post some collateral that can be transferred using the validator’s key. Hence, the collateral could effectively be stolen by the white-label operator that keeps the key.

This part will take 2 weeks.

On implementation

This phase does not contain implementation, because at this stage of the research it is impossible to tell what should be implemented, how, and how long it would take. We will propose a proof-of-concept implementation for one of the solutions designed in TASK 4 or 5, and will submit a separate proposal for the implementation effort as soon as possible.

This phase will be completed within 22 weeks from the date of the agreement.


Organization, Funding, and Budget

The project will be funded by Lido DAO. The DAO will pay Nethermind 450 000 DAI, of which 200 000 DAI will be paid upfront and the remaining 250 000 DAI on delivery.

At the end of the project, the LEGO council will decide whether the provided systematization of knowledge meets the agreed requirements and, if so, proceed with the payment. In the case of disagreement between LEGO and Nethermind on the quality of the deliverables, Lido operators will be used as a resolution mechanism.

The payment will be made to address eth:0x237DeE529A47750bEcdFa8A59a1D766e3e7B5F91

Next steps

We want to put this proposal to a vote in 7 days. The voting will remain open for 7 days.


The team’s presentation has truly exceeded our expectations in terms of quality and coherence. It is evident that the new proposal makes far more sense, and we are grateful for the additional effort made to refine it and cater to the specific needs and budget of Lido. Thank you for keeping us updated and ensuring that the proposal is accessible to a wider audience.


Snapshot vote started

We’re starting the A proposal for partnering with Nethermind to design a mechanism for a good validator set maintenance. Phase II. Snapshot, active till Tue, 07 Mar 2023 18:00:00 GMT. Please don’t forget to cast your vote!


Snapshot vote ended

Unfortunately, the A proposal for partnering with Nethermind to design a mechanism for a good validator set maintenance. Phase II. Snapshot didn’t reach a quorum. :no_entry:
The results are:
For: 45.6M LDO
Against: 46 LDO

The Snapshot vote failed to reach a quorum but received no significant opposition. We plan to restart the Snapshot voting; the launch is scheduled for tomorrow, March 14th.
Please have your keys ready to vote :old_key:


Snapshot vote started

The A proposal for partnering with Nethermind to design a mechanism for a good validator set maintenance. Phase II (restart) Snapshot has started! Please cast your votes before Tue, 21 Mar 2023 19:00:00 GMT :pray:


Snapshot vote ended

Thank you all who participated in the A proposal for partnering with Nethermind to design a mechanism for a good validator set maintenance. Phase II (restart) Snapshot, the proposal passed! :pray:
The results are:
For: 51.7M LDO
Against: 58 LDO


Yay, thank you for your votes!
The next step is an EasyTrack motion to top up the LEGO multisig wallet, followed by a tx to Nethermind; links will be provided here, stay tuned.

Link to EasyTrack motion to top up LEGO multisig.


@mpzajac the first part of the grant has been disbursed, looking forward to seeing the results of this research!

Link to the transaction:


Dear Lido DAO members and contributors,

We hope this post finds you well! On behalf of the Nethermind Research team, we would like to update you on the progress of our Phase II research project. We are moving into the final third of our efforts and would like to share what we have learned and the open questions we will finalize over the next several weeks.

Exploratory research

We started this project with many unknowns: about white labeling itself, but also about the tools that could be used to construct a resolution mechanism that would analyze and rule over cases of white labeling. Our first task was to analyze the literature and look for any analogies to our problem (Task I). We also set out to look for ideas, existing protocols, and research articles that would constitute an adequate resolution mechanism for Lido’s needs. (Task II).

On the first front, unfortunately, we found no relevant matches in the literature that could inspire an approach against white labeling. If anything, the existing body of research aims to facilitate the prospect of delegating computations to remote servers that have a larger computational power—a goal that is pretty much the opposite of having node operators run their own infra.

On the second front, we set out to study a variety of protocols that could be used to rule over a white-labeling dispute. Among these, we went over decentralized oracles, prediction markets, and decentralized justice—which we referred to in the proposal under the umbrella term of arbiter protocols. Here, we analyzed more than 50 papers on arbiter protocols and provided a comparative review of the preeminent approaches. Analyses of these, along with a database including a summary of the papers (similar to the one delivered in our Phase I research), are forthcoming and will be delivered with the research.

Amidst our analysis, decentralized justice approaches—also referred to as blockchain-based arbitration—came out on top as the best approach. With this term, we refer to a variety of protocols that “use blockchain technology to decentralize dispute resolution by crowdsourcing the adjudication of disputes to a worldwide pool of willing juror-arbitrators.” These jurors are financially incentivized to rule truthfully on questions—for example, the juror might need to stake in a court to participate.

Some reasons why we deem this the superior choice include:

  • Prediction markets, by themselves, do not provide resolution to the events that they allow investment in—a pattern that held true for all the projects we analyzed. Indeed, prediction markets require the outcome of the studied event to become self-apparent, or to have a trusted party interpret the outcome. The former is not true in the case of white-labeling disputes, and the latter goes against the ethos of progressive decentralization that this mechanism aims to uphold.
  • The oracle approaches we found have restrictions on the types of data that can be ported on-chain to provide a decentralized resolution. These tools shine when it comes to simply reporting on off-chain data (such as Chainlink), or attesting to information that sits behind a secure TLS connection (DECO/Town Crier). However, they are ill-equipped to provide interpretations of evidence coming in general styles and formats—a situation that should be commonplace with white-labeling disputes.
  • Decentralized justice approaches do not suffer from the weaknesses above. Compared to prediction markets, they leverage economic incentives to directly provide a resolution to a question or dispute, as opposed to social sentiment. Moreover, they can be thought of as a more general type of oracle that is able to provide an interpretation of a set of data, while incentivizing this interpretation to be fair via cryptoeconomic restrictions and rewards for a set of jurors.

Regarding decentralized justice approaches, we reviewed the existing and extinct alternatives in the market and have favored Kleros. The reasons include: being a fully permissionless solution, being the most mature protocol of this kind, and having an active and high-quality research team, with which we have had the pleasure of discussing this problem.

In light of the above, our next task was to use Kleros as a key primitive in a full-fledged dispute resolution mechanism. But, as they say, the devil is in the details.

Mechanism design

With the clarity achieved from the first two tasks, we moved on to the mechanism design (Task III). Proposing a Kleros↔Lido integration requires several degrees of fine-tuning, such as:

  • What types of white-labeling evidence are expected to be seen in a Kleros court?
  • Given the above, what is the expected cost of running a Kleros court—i.e. what financial compensation is required for jurors?
  • Node operators should lock up some amount of capital to cover court fees in case they are deemed liable. In such a case, part of this amount should also go to the parties making a successful accusation, so that they are incentivized to do so. What is the magnitude of this capital, and how should it scale as a function of the number of validators?
  • How are jurors to be onboarded to the platform? Should we check their expertise?
  • How can we estimate the effectiveness of our mechanism, to make sure that it causes prospective white labels to lose more money than they make?
  • In the case of a Kleros court malfunction, how can the Lido DAO regain control over a dishonest or malicious ruling?

We are currently finalizing the majority of these research questions, which will provide comprehensive guidelines for a mechanism setup.

In parallel, we have analyzed the role of heuristic approaches in detecting white labeling (Task IV). Since this tool is likely to lead to a “cat and mouse” game between white labels and the heuristic models, our approach here cannot be prescriptive, lest it become obsolete. Instead, our proposal for this deliverable is:

  • Making sure that the dispute resolution mechanism has the correct financial incentives to encourage external model builders to “hunt” these white labels (and get compensated for it).
  • Providing general guidelines on the types of validator features that could be observed to achieve such a task.

Finally, we had an additional deliverable “Do you trust your white label?”, where we intended to complicate the relationship of trust and delegation with white label operators (Task V). Intuitively, the goal was to “tie” access to the Lido rewards address with the validator key, in a way such that a party that holds the latter is able to have control over the former and steal funds. To this end, we initially attempted to adapt a proof-of-custody construction for the white labeling problem. This approach was too complex and ended up being unfruitful. We instead aim to provide simpler recommendations as to how this idea could be executed via some new functions in Lido smart contracts.

With this, we come to the end of our update. We still have several weeks of research to go before sharing the final report with the community, but we hope this provides a sneak peek into what we are working on and how we foresee the desired resolution mechanism. We will be back soon with more—stay tuned for a research-packed aftermath!


Dear Lido DAO members and contributors,

Week 22 is here, and with it, the delivery of our phase 2 research! Please find our deliverable in the following link:

White-labeling resistance: systematization of knowledge, research, and mechanism design

(Note that we are looking into a suitable format to back up this deliverable for the years to come, rather than relying solely on Notion. We will post it here when we have it.)

On behalf of the Nethermind Research team, we thank all DAO members for their votes, their confidence, and the opportunity to conduct research at the cutting edge of Ethereum staking. We hope this work can act as a catalyst for the DAO’s vision of a more decentralized yet secure future, and we look forward to upcoming collaborations and research projects that keep Lido at the vanguard of liquid staking solutions.


Thank you for this work folks!


Really happy to wrap up phase 2. Immense thanks to the Nethermind team and the ever-growing participant group on these calls (including @ccitizen ) who have put in immensely valuable time and effort to produce really cool work in this space.