On October 3, 2023, the Accounting Oracle report indicated that node operator Chorus One (id #3) had failed to process, in a timely manner, 31 exit requests that had been signalled via the Validator Exit Bus Oracle (VEBO) on September 28th. As a result, these validators were automatically marked as “stuck” and the NO was considered delinquent, meaning the NO would receive half rewards until both a) the overdue exits were processed, and b) a cooldown period expired.
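For readers less familiar with the mechanic, below is a minimal, illustrative sketch of the half-rewards penalty described above. The field names and exact penalty conditions are assumptions for illustration, not the actual Lido contract interface:

```python
# Illustrative sketch of the delinquency / half-rewards mechanic described above.
# Field names and exact conditions are assumptions, not the Lido contracts' API.
from dataclasses import dataclass

@dataclass
class OperatorStatus:
    stuck_validators: int     # exits signalled but not processed in time
    refunded_validators: int  # stuck exits that have since been processed/refunded
    cooldown_ends_at: int     # timestamp after which the penalty can be lifted

def is_penalized(op: OperatorStatus, now: int) -> bool:
    # The operator stays penalized while any stuck validators remain,
    # or while the post-remediation cooldown window is still running.
    has_stuck = op.stuck_validators > op.refunded_validators
    in_cooldown = now < op.cooldown_ends_at
    return has_stuck or in_cooldown

def reward_share(op: OperatorStatus, now: int, full_share: float) -> float:
    # Half rewards while penalized, full rewards otherwise.
    return full_share / 2 if is_penalized(op, now) else full_share
```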
Such events are handled automatically by the protocol as exits (if necessary) are re-routed to other operators so that withdrawals are not affected.
As per the stipulations in the LoE Validator Exits Policy, this post serves to evidence the formal issue raised with the Node Operator. The Node Operator was also contacted separately by NOM workstream contributors when the issue was identified.
The Node Operator was contacted and actions to remediate the situation were successfully taken on October 3rd. Chorus One will provide an analysis of the incident and the remediation actions undertaken. The 31 exits in question were processed on October 3rd, and the numerous exit requests issued since have also been processed in a timely manner. The Node Operator has since resumed a status of in good standing and received full rewards as of today’s Accounting Oracle report. The course of the incident and the penalties can be observed in the LoE NO Rewards and Penalties dashboard.
Further, as per policy:
Due to the re-routing of validator exit requests, the DAO should consider (via an ad-hoc vote) overriding the total limit of active validators for the relevant Node Operator such that if/when they resume a status of in good standing, they are not benefiting at the expense of Node Operators who took over the processing of the re-routed exit requests.
What I see on-chain now:
- The NO has resumed a status of in good standing as of yesterday (the Accounting Oracle report for Oct 10, 2023 should show the operator receiving full rewards following the expiry of the cooldown period)
- The NO is operating 9427 active validators, which is fewer than other NOs in the same cohort (e.g. P2P) who were at a similar number of active validators before this set of exits was signalled by the protocol. Due to the deterministic exit order (see code), Chorus One has essentially been prioritized for exits (along with the oldest two waves of Node Operators) by the “stake weight” prioritization mechanism, which will generally continue to be true until the operator has less than 1% of Lido stake allocated (they are still above that threshold). A simplified sketch of this prioritization is given after this list.
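To make the prioritization above concrete, here is a deliberately simplified sketch of how a stake-weight mechanism with a TargetLimit override might pick the next operator to exit. The field names and ordering rules are assumptions for illustration, not the protocol’s actual exit-order code:

```python
# Simplified, illustrative model of stake-weight exit prioritization - not the actual protocol code.
# Assumption: operators over an explicitly set target limit are served first;
# otherwise the operator with the largest share of active validators exits next.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    active_validators: int
    target_limit: int | None = None  # None = no target limit set

def next_operator_to_exit(operators: list[Operator]) -> Operator:
    total = sum(op.active_validators for op in operators)

    # Operators above their target limit are prioritized (assumed behaviour).
    over_limit = [
        op for op in operators
        if op.target_limit is not None and op.active_validators > op.target_limit
    ]
    if over_limit:
        return max(over_limit, key=lambda op: op.active_validators - op.target_limit)

    # Otherwise, pick the operator with the largest stake weight.
    return max(operators, key=lambda op: op.active_validators / total)

ops = [Operator("A", 9427), Operator("B", 9500), Operator("C", 800)]
print(next_operator_to_exit(ops).name)  # "B" - largest share of active validators
```

In this toy model, an operator holding a large share of active validators keeps getting picked for exits until its share falls below that of its peers (or below a threshold such as the ~1% mentioned above), which is why Chorus One remains prioritized for exits.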
Given the above, I do not see a strong reason to set a target limit at this time, as a) the overdue exits were processed ASAP once the issue was noted and the NO began processing exits again the following day, and b) the NO is already prioritized for exits (unless there is an operator with a TargetLimit ahead of them).