In the Ethereum censorability monitor research thread, a few projects managed to track the degradation of service that censored transactions face when trying to be included in a block. The top awarded project (Neutrality Watch) identified some Lido Node Operators that seem to heavily censor non-OFAC-compliant transactions (for example: ParaFi Technologies and Figment).
I couldn’t find any follow-up action on Lido’s part to address this censorship problem. Considering that “Lido DAO’s purpose is to keep Ethereum decentralized, accessible to all, and resistant to censorship” (Lido on Ethereum Scorecard), I think it would make sense to penalize censoring NOs. Temporarily deactivating them by calling deactivateNodeOperator() on the NodeOperatorsRegistry contract could work. For example: the oracle could use a Neutrality Watch API to see who’s censoring and pass the penalty using a new item type 3, “EXTRA_DATA_TYPE_CENSORING_NOS”, in the extra data of the ReportData struct in the AccountingOracle contract. Lido would then enforce the penalty.
This is just a quick idea of how the penalization could be implemented. It would involve modifications to a few contracts, but considering Lido’s purpose, I think it makes sense to pursue.
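To make the suggestion concrete, here is a minimal off-chain sketch of how such an extra-data item might be packed before submission. Everything here is an assumption for illustration: the item type value 3, the byte layout (2-byte item type, 3-byte module id, 8 bytes per flagged operator id), and the function name are all hypothetical and do not reflect the actual AccountingOracle extra-data wire format.

```python
import struct

# Hypothetical new extra-data item type for the AccountingOracle report
# (the value 3 and the layout below are illustrative assumptions).
EXTRA_DATA_TYPE_CENSORING_NOS = 3

def encode_censoring_nos_item(module_id: int, node_operator_ids: list) -> bytes:
    """Pack one extra-data item: 2-byte item type, 3-byte staking module id,
    then 8 bytes per flagged node operator id (illustrative layout only)."""
    payload = struct.pack(">H", EXTRA_DATA_TYPE_CENSORING_NOS)
    payload += module_id.to_bytes(3, "big")
    for no_id in node_operator_ids:
        payload += no_id.to_bytes(8, "big")
    return payload

# Example: flag node operators 7 and 12 in staking module 1.
item = encode_censoring_nos_item(1, [7, 12])
print(item.hex())
```

On-chain, a contract consuming this item would decode the operator ids and call deactivateNodeOperator() for each; that part is omitted since it depends on the real report format.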
The issue of censorship is a complicated one. As per the research thread you indicated, a few different approaches (e.g. Neutrality Watch and Nero’s https://censorship.pics/) have emerged that do a good job of tracking censorship on the network from a “possible impact” perspective (i.e. real transactions impacted, in a “time to land on chain” fashion). There was also a great analysis on the ultra sound relay site for a while that examined estimated actual delay (i.e. how long a potentially filtered transaction would stay “unpicked up”); unfortunately that analysis isn’t provided anymore (I understand it was too costly to maintain). In all these analyses, the operators of the Lido protocol as a whole generally outperform the network average from a censorship-resistance perspective. In general, most “potentially filterable” transactions make it onto the network within something like 4-10 blocks, which is still really quick and IMO not worth building over-engineered mechanisms to address immediately. Other solutions (e.g. inclusion lists) are more robust and sustainable ways of doing so, and we (as a community) should be focused on supporting those.
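The “time to land on chain” metric these trackers use can be illustrated with a tiny sketch: for each potentially filterable transaction, count the blocks between when it was first publicly observed and when it was included. The transaction hashes and block numbers below are made-up sample data, not real measurements.

```python
# Made-up sample data: tx hash -> block height when first seen in the
# public mempool, and tx hash -> block height of actual inclusion.
first_seen = {"0xaa": 100, "0xbb": 100, "0xcc": 101}
included = {"0xaa": 104, "0xbb": 110, "0xcc": 105}

# Inclusion delay in blocks for each potentially filterable transaction.
delays = sorted(included[tx] - first_seen[tx] for tx in first_seen)
median = delays[len(delays) // 2]
print(delays, median)  # -> [4, 4, 10] 4
```

A real tracker of course needs a robust mempool observation pipeline and a way to classify which transactions are “potentially filterable”; the hard part is the data collection, not this arithmetic.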
In general, I don’t think it makes sense to penalize NOs who filter, as they may be doing so to comply with their interpretation of local legal and regulatory requirements. After all, if the network (Ethereum) wanted to do something like this, it should do it at the base layer (which is really the only level at which this kind of thing can be reasoned about with a good degree of technical comfort). Instead, the community as a whole is moving toward keeping participation open to anyone (i.e. NOT going in the direction of disallowing those who wish to filter certain transactions) while adding mechanisms, like inclusion lists, to safeguard censorship resistance.
Given that, my opinion is that the protocol should be inclusive in the sense of allowing usage by operators from across the globe, so allowances should be made for how NOs may need to operate locally, and operators should not be explicitly penalized – BUT, it may make sense to, for example, prioritize directing stake (or at least a portion of it) to operators who don’t engage in filtering practices. Such kinds of improvements to the Curated Module (or modules in general) can definitely be considered. On the other hand, advancements such as Inclusion Lists may render such a mechanism moot, so it’s to be determined to what extent it would really be required, and what makes most sense from a cost/benefit perspective of resource allocation.
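The soft, non-punitive alternative described above (down-weighting rather than deactivating) can be sketched in a few lines. The operator names, the observed-filtering flags, and the 0.5 down-weight factor are all illustrative assumptions, not a proposed parameterization for any Lido module.

```python
# Hypothetical operators and whether they were observed filtering.
operators = {"op_a": False, "op_b": True, "op_c": False}

# Illustrative down-weight for filtering operators: reduced share of new
# stake, not removal from the set.
FILTERING_WEIGHT = 0.5

weights = {op: (FILTERING_WEIGHT if filtering else 1.0)
           for op, filtering in operators.items()}
total = sum(weights.values())

new_stake = 100.0  # new stake (in ETH) to allocate across operators
allocation = {op: new_stake * w / total for op, w in weights.items()}
print(allocation)  # op_b receives a reduced but non-zero share
```

The design point is that the lever is continuous: a filtering operator keeps operating and keeps earning, just with proportionally less new stake, which avoids the legal/regulatory cliff that outright deactivation creates.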
On the specific suggestion: I think it’s very technically complex and difficult to use off-chain oracle data for things like this. The analysis must be unassailable, clear, and reproducible, and the “supply chain” for the transfer of this data on-chain and then its usage by the protocol must be very robust (especially if we’re talking about disabling node operators or re-allocating huge portions of stake). Unfortunately, that’s really not the case here. Even with relatively simple things like consensus-layer performance there are enormous technical challenges in accurately ferrying this data to the EL for usage, and it’s very expensive to do so.
So, tl;dr:
1) Is there a pronounced CR problem right now on the network? IMO no.
1a) How does Lido as a whole perform w.r.t. CR compared to other large staking solutions and the network as a whole? Really pretty good (check censorship.pics → Censorship-Meter - Validators, select “Validators” button)
2) Is it better to focus time/energy on a network-wide solution or on hooking the protocol up to a complex off-chain mechanism for identifying CR and giving it explicit protocol levers? IMO the latter only makes sense if the answer to (1) changes dramatically in a short period of time; otherwise it’s best to focus on things like ILs and to support the Lido protocol and its node operators in adopting them to advance the CR properties of the network.