Ethereum validators monitoring: Rewards & Penalties


  • Reward - a value in Gwei credited to a validator that performed a duty honestly and well
  • Penalty - a value in Gwei charged to a validator that performed a duty poorly, applied for each missed duty except block proposal
  • Missed reward - the potential value in Gwei that would have been credited to the validator had it performed its duty perfectly.
    There are three cases in which missed rewards can occur:
    • bad attestation (calculated as the difference between the perfect attestation reward and the actual one)
    • missed proposal (calculated as the average proposal reward in the epoch)
    • bad sync (calculated as the difference between the perfect epoch sync reward and the actual one)
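The three missed-reward cases above can be sketched as simple functions. This is an illustrative sketch only: the function names and Gwei amounts are hypothetical, not the tool's actual implementation or real protocol constants.

```python
def missed_attestation_reward(perfect_reward: int, actual_reward: int) -> int:
    """Bad attestation: difference between the perfect and the actual reward."""
    return max(perfect_reward - actual_reward, 0)

def missed_proposal_reward(epoch_proposal_rewards: list[int]) -> int:
    """Missed proposal: estimated as the average proposal reward in the epoch."""
    if not epoch_proposal_rewards:
        return 0
    return sum(epoch_proposal_rewards) // len(epoch_proposal_rewards)

def missed_sync_reward(perfect_epoch_sync: int, actual_sync: int) -> int:
    """Bad sync: difference between the perfect epoch sync reward and the actual one."""
    return max(perfect_epoch_sync - actual_sync, 0)

# Illustrative Gwei values:
print(missed_attestation_reward(14_000, 9_000))        # 5000
print(missed_proposal_reward([42_000, 38_000, 40_000]))  # 40000
print(missed_sync_reward(500, 310))                    # 190
```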


Lido uses ethereum-validators-monitoring (aka balval, e-v-m, ‘turtle’ :turtle: ) as its main internal real-time monitoring and alerting tool for Ethereum validators. It helps Lido contributors and Node Operators get answers to routine questions like: “How many attestations have been missed in N epochs by X node operator?” or “How well do Lido validators perform their duties in the sync committee compared to the rest of the Consensus Layer?” or “How much has the balance of the validators changed after 24 hours?” etc. But until recently, they couldn’t get an answer to questions about rewards, penalties, and missed rewards. For this purpose, the Lido Automation team developed a feature that produces such metrics and a Grafana dashboard that shows them. The main goal of this post is to ask the community what else might be helpful on this dashboard.


The dashboard consists of a Node Operator selector and several sections. Let’s look at each in more detail.


This row shows metrics about:

  • Delay between application and beaconchain head
  • Current application epoch
  • Calculation error - the difference between the calculated balance difference and the real balance difference for the selected node operators. At the moment, it may be non-zero in slashing and inactivity-leak cases because those are not yet processed (but we are working on it)
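The calculation error described above boils down to comparing a reconstructed balance delta against the on-chain one. A minimal sketch, assuming hypothetical names and Gwei values (not the tool's actual code):

```python
def calculation_error(calculated_delta_gwei: int, real_delta_gwei: int) -> int:
    """Gap between the balance delta reconstructed from rewards/penalties
    and the real on-chain balance delta. Non-zero when some flows
    (e.g. slashing or inactivity leak) are not yet accounted for."""
    return calculated_delta_gwei - real_delta_gwei

# Balance change reconstructed by the monitoring tool (rewards - penalties):
calculated = 120_000 - 3_000
# Actual balance change observed on the beacon chain:
real = 115_500
print(calculation_error(calculated, real))  # 1500
```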

Average chain validator stats

This section shows rewards, penalties, and missed rewards metrics for each consensus layer duty: attestation, sync committee, and block proposal. These are average values for the whole chain and help to investigate the impact of Lido’s performance on the chain as a whole.


This central section shows a summary of all duties.


It’s a compilation of stacked bars and lines showing per-epoch values of rewards, penalties, and missed rewards on a logarithmic scale.

Total earned, rewards, penalties, missed rewards

Each panel shows the total sum across all node operators selected on the dashboard.
Total earned is Total rewards minus Total penalties.

Total summary

A table with a summary for each selected node operator.


This section shows a rewards table, broken down by duty, for each selected node operator, along with time series of per-epoch values for the selected time period.


This section shows a penalties table, broken down by duty, for each selected node operator, along with time series of per-epoch values for the selected time period.

Missed rewards

This section shows a missed rewards table, broken down by duty, for each selected node operator, along with time series of per-epoch values for the selected time period.

We want your opinion

Lido strives to open source useful tools for the cryptocurrency community, and ethereum-validators-monitoring is an example of that.

Leave a comment on what else we could implement; it would help us improve the tool!


@vgorkavenko Thanks for both amazing feature and decent description!

It is really interesting to see how many rewards went unearned by the validators. I guess we could use {actual_reward}/({actual_reward} + {penalties} + {missed_rewards}) as a new performance score for NOs


Thanks for the great rundown @vgorkavenko. The new :turtle: functionality is super helpful!

RE the proposed rewards metric, think it’s certainly a good idea to have aggregate metrics such as this, but I have a different suggestion regarding the naming, IMO it’s better to keep performance measures strictly related to scoring the effectiveness of execution of validator duties (some of which are not penalized or rewarded) and instead call this something like “reward attainment” / “reward effectiveness” score. Measuring and evaluating both is a very important part of running validators, and I don’t think conflating rewards/profitability with performance (“a validator doing its job”) is optimal.


@vgorkavenko, that’s super interesting. Wouldn’t “Slashing Penalty” be an obvious addition?

Sure. It wasn’t a priority in the first iteration (because it’s a very rare case), but it is something we will implement, as mentioned in the description above. We currently monitor slashings in another dashboard, and a “Slashing Penalty” metric, as well as the reward for reporting a slashing, would be a great addition :+1:


To be clear, no slashing has happened to date on any Lido validators, so there’s not much to report in a view whose data is otherwise constantly updating and focused on a relatively short timeframe of epochs.